I don't want to host services but I do (ergaster.org)
174 points by thibaultamartin on Aug 9, 2023 | 169 comments



> My recommendation to most people putting services online would be: either do it for yourself only, or do it as a team with proper structure and processes. What sounds like an initiative to emancipate people could actually alienate them to you, and that is a huge responsibility.

Oof, good advice. I run a startup that helps folks self-host, but it really does split the audience in two. Folks technical enough to swallow the somewhat rough edges become huge fans and part of a fun community. Folks just on the other side of that split tend to have pretty frustrating experiences...

I dearly wish I had the capital to be able to spend another full-time year on making our product better, but self-hosting is a really tricky thing to build a company around - the audience by definition is looking to avoid paying for services!

I do still fully believe (and hope!) that one day, far from now, self-hosting reliably will be trivial, and our kids will all think we were a bit slow for relying on a few megacorporations' hosted services.


I think the main problem is that ordinary people don't even see what problems self-hosting is supposed to address; and those that do, still need to dedicate significant time and effort to "tinkering", even when handed a huge chunk of the solution on a plate.

Another huge problem is that there's a home network between your product and the user's other devices; most home networks are utter crap, and often even tech-savvy people don't have a whole lot of control over it (I hate my ISP's modem with passion). This seriously limits your potential to provide an excellent UX; IMHO it's the UX that makes or breaks a product for "the rest of us".

I used to self-host a whole bunch of things on a VPS, including my blog, git repos, a DIY blogroll / RSS reader, etc. In the end I've decided it was not worth the effort; the blog was moved to Netlify, repos to Github, and the RSS kludge got swapped for NetNewsWire with iCloud sync. I was paying €5 for the VPS, yet now I'm paying Apple €20 to host my email, sync my photos, get access to the music catalogue, etc. I would definitely pay €20/mo for a box under my desk + an online service, provided it gives me similar value without much additional effort.

I think the problem that KubeSail/PiBox is aiming to solve might be both too broad (run any software you like!), and too narrow (if you're an enthusiast!) at the same time. I don't want to run Miniflux; I want to have my RSS feeds synced between devices. The software that pushes the bytes (and the hardware it runs on) should be invisible - unless I decide (out of my own free will / curiosity) to pop the cover open and start tinkering.

I don't think you can solve this by addressing shortcomings in a single piece of the stack. Both the layer below you (your average home network), and above you (the apps) have their own problems; some are like splinters (tiny but enough to ruin the experience), some are fundamental ("what is MySQL and why do I need to know"). I don't think it's a lost fight, but I would try to start with a vision for a more vertically integrated solution; maybe one step of that road is to eventually build your own WiFi AP/router (or even become an ISP), maybe to make a deal with Spotify (or even directly with EMI/WB/etc)... I don't think a task is too big if you can seriously challenge Apple/Amazon/Google at the end of the road.


I agree! Unfortunately, we pivoted to self-hosting right around the time we were running out of money, and around the time I had a child and thus, needed money. I'm really glad we pivoted to something we love and our users love, but it hardly pays the bills.

I've spoken with several people who are starting similar companies and who've reached out to me (happy to do that!) - my advice is similar to yours: keep it simple, keep it focused. KubeSail is a developer tool turned home-hosting tool, but if I could rebuild it, I'd make it incredibly simple to get Jellyfin and a torrent/VPN client installed and that's about it, and then execute insanely hard on making that as streamlined and foolproof as humanly possible.


> I'd make it incredibly simple to get Jellyfin and a torrent/VPN client installed and that's about it

If you do that you'll have the film industry coming after you very quickly.

I'd like to sell home users anything but that particular product.


So do you think there are enough people/companies willing to pay for that streamlined experience?


I think if you could sell an as-easy-as-a-Chromecast box that could do Jellyfin, had a nice UI for uploading local media, and had an easy guide or built-in VPN/torrent client, you'd be able to build a great business.

Of course - you can't exactly vendor torrent stuff - and I'd never suggest anyone pirate anything. But certainly the sky is the limit, and that's just media. Other tools like Monica CRM, Tandoor Recipes, Mastodon, etc. are their own markets too!

We’re too far in the technical side to be mass appeal, and our UI/UX is far from “mom-friendly”. Still - I’m optimistic a better entrepreneur than myself will conquer this one day.


The problem is that creating a user-friendly product before you can start earning income means a lot of upfront investment, which is risky.

I'd rather take a product like yours and improve it. What's stopping you from doing that?


A nontrivial full time job, a wife, a toddler, an infant, a mortgage, and my own fatigue.

I will, eventually, have the capital to continue.


There's already a huge market for SOHO network gear, NAS / media appliances, etc. and all of that needs to be usable by non-experts. We need to ask the other question: what motivates people to self-host?


> I hate my ISP's modem with passion

What is their modem doing that you haven't been able to work around?


It's a modem+router+switch+AP that technically does everything you need, but does all of it badly - really just getting in my way. E.g. it obviously has a builtin DHCP server, but it won't let me set custom DNS. I want to use custom DNS, to block at least some of the ads/tracking on all of the devices on my network; so I have to disable that DHCP and use my own. But the modem resets that setting back to enabled every reboot! (Took me a while first time around to notice there's two DHCP servers on the network, argh). So I've disabled the internal AP, brought my own, and I'm connecting the rest of the network through a managed switch that blocks DHCP to the router.

So I've got one device that tries to do the job of four... but instead I need three devices to do the job of one. I try not to think about it.


Yeah I just have hardware I like running openwrt behind the modem doing all three jobs, with Tailscale Funnel doing the job of port forwarding so I basically never need to interact with the provided modem's interface. Because you're right, it sucks.


I used to do double NAT but the thought wouldn't let me sleep at night ;)


I'd flash custom firmware at that point. Be aware that I suck at networking, however. Is there a compatibility/hardware-related issue?


The problem is the builtin cable modem. I don't have the parameters/credentials, I have next to zero experience wrangling DOCSIS, and frankly I just don't want to bother. Switching to fiber soon enough anyway.


This is typical of ISPs in France: you get a crappy device that does router + switch + wifi but none of them are done right.

There is a whole community around replacing these devices with other devices (owned by the users) but ISPs make it difficult despite a European law that forces them to allow this.

I used to have an Orange box that was hanging on a regular basis when there was too much traffic. Had to reboot it nightly.

I now have a high-end box from another provider that is OK, but I still do my own DHCP.


>I [now] have a high-end box from another provider that is OK, but I still do my own DHCP.

Freebox Delta?


Yes. It is really not bad. I used to be in the "replace your box" camp (I replaced the shitty Orange box) but with the Delta I use it as expected. Not to mention that it can run VMs so I put Pihole there too (in addition to my home server).

It also forced me to quickly learn IPv6 :)


It’s been a long time since I’ve used my ISP’s supplied modem/router. The only hardware from my ISP that I’m using now is their GPON SFP module, which is installed in the NIC of my router machine.


Hmm.. do you mind talking a bit more about it? Are you just consulting for them, or getting your hands dirty as well?


About my business? Sure! It's at https://kubesail.com and we sell our hardware at https://pibox.io (the software works with almost anything that can run Linux tho!) :)

Our best feature is that the website will detect if you're on the same network as your machine and if so, offer "local" links instead of remotely proxied ones. That way non-technical users don't need anything fancy or to be aware of how NAT traversal works. On top of that, the "local" URLs still get valid HTTPS certs for free, so non-technical users don't get any scary browser warnings.

We started out as a way to make self-hosting easier for corporations, and were doing consulting work, but the users who joined our community were mostly home-hosters, so we leaned into that! Jellyfin is now our most popular app.


Given the market that you're after, why sell it as a SaaS? The overlap between people who want new subscription services and people who want to self-host feels like an empty set. Why not do the more traditional model of selling version 1 of the software for $x, and then when version 2 comes out, sell that for $y, and people with version 1 can pay $z to upgrade, where z < y.

The math could work out to be the same, but the psychology of marketing is everything. If I, as a hard-core-self-hoster, pay $60 for a version 1 of software that I can use forever, and version 2 comes out a year later, and I pay $60 for that; I'm much happier to do that, compared to having to pay $5/month for yet another subscription service, even though that's exactly the same amount of money. I already have so many subscription services! I don't want to pay for another one!


That's effectively the tactic with the hardware - however - home hosting does need a SaaS imo! Between remote access (proxying a la Cloudflare Tunnels), backups, and VPN, there are nontrivial ongoing services that need to be in place for happy home hosting.

That said - you’re not wrong at all and that’s why our service is totally optional and our free tier is quite generous.

The SaaS for sure made more sense when our target customers were companies.


The UX for a non-technical user to set up a port forward is a total non-starter, but self-hosted remote access and VPN are well covered by Tailscale + Tailscale Funnel these days. Backups for data is an actual service that people pay (and could get paid) for.


If you could provide similar functionality / workloads as the old MS Small Business Server, isn't there an MSP / reseller market opportunity? Self-hosting for businesses that are < 25 users. They still tend to be serviced by small MSPs. They've gone cloud because that's where their vendors went. Many hate it.

You'd need

* Directory/Authentication

* Email

* Shared Folders

* Wiki / Web something

If there were options for adding IM, PBX, CRM it could be a compelling offering for resellers.


> 5-bay and desktop HDD compatible models are under development and will be coming soon.

The box does look pretty. Any plans for dual/multiple ethernet versions? At a quick glance the Pi compute module doesn't have any so you must have added the lone one yourselves?

And of course the geek in me would like to know the network chips and how they're connected to the compute module (although I guess usb is the only choice).


The order page says "pre-order your pibox", but later says in-stock, and next-day shipping.

Very tempting looking product!


For me self-hosting is about taking responsibility and having the option of identifying and eliminating faults and threat surfaces myself. This is very much extra work that will consume time, energy, and attention. With all this, my expectation is that self-hosting ultimately amounts to a premium service which is likely to cost more, starting with the hardware and the management of it. Seems like this is not a big sector, but a potentially profitable one with some very committed clients.


Would you happen to know why your customers choose to self host? There's a myriad of potential reasons, and I'm curious which ones are the primary ones.


Genuine q. The main thing stopping me from self-hosting is security. Having a box in the cloud get hacked - as long as the data is properly encrypted and secured - is not good, but I can also easily destroy it and spin up a new one.

But having your home server hacked and then presumably your entire home network and everything in it - seems way too fraught to even attempt it.

Thoughts on that? Am I just too unfamiliar with network security, and is this actually solved now — is there already a well-defined, trusted approach to this?


A webserver like nginx hosting a static (files in folders) website is incomparably more secure and less of a risk than say, opening your web browser and going to a website without disabling javascript execution. The number of nginx remote exploits in the last decade could be counted on one hand, probably without using all the fingers.

The mistake many make at the start is trying to run a complex web application backend with PHP or databases or whatever. Or using some "easy" all-in-one container containing these complexities. Maintaining the security of that is a never-ending, difficult task. Whereas maintaining nginx installed from your OS repos literally requires no work at all. KISS and you'll be perfectly safe.


If you are just hosting static files, you could drop it on github pages and it would work perfectly for free.


For all its faults, the term "zero trust" applies here - treat your local network as untrusted.

Historically the security of Ethernet, IEEE802.11 and other such protocols has been full of half measures, laughably weak crypto and whatever WPS is supposed to be. Look at the history of wireless security if you want to have a good laugh.

In the application layer, on the other hand, we have rock solid solutions like SSH which remain the gold standard for security.


That sounds like an ideal more than a reality. Windows has separate "Home network" firewall settings that it automatically detects, and I assume lots of other consumer devices make similar assumptions.

Yeah you can lock everything down, _if_ you're careful, _if_ you don't mess up, _if_ some consumer hardware doesn't have a vulnerability


With 3 routers you can isolate your home network from external-facing services very securely.

https://www.grc.com/sn/sn-545.pdf


Haven't read yet but I found the transcript in HTML which is easier to read on a phone: https://www.grc.com/sn/sn-545.htm


TLDR

Untrusted devices behind one router, trusted devices behind another router, both routers behind a third. The routers should be dumb, rock hard, and do NAT. If the untrusted devices were behind just the outer router they could potentially intercept trusted traffic traversing that network. If the trusted devices were behind just the outer router, I guess the untrusted devices might somehow use IP tricks to enumerate devices or something.

They mention VLANs, and say it's basically a homemade VLAN. Why not use VLANs then? No mention of DMZs. Or if you have a single router with a configurable firewall, couldn't you just firewall traffic between untrusted and trusted ports? I'm not sure of the context of this idea. Do they make cheap routers with enterprise-level hardening that don't support firewalls?


You have to cheat and compromise your morality somewhere to make it work with decentralizing, I've found. Here, the answer is a Cloudflare tunnel. Hail corporate.


Same concern here. Also not just hacking the box, but if they figure out your service's ip it's the same as all your other stuff. Would ipv6 help this? Each device gets a different ip so there shouldn't be correlation, but could people make assumptions about ipv6 prefixes to discover other hosts on your network?

Are separate physical hosts a real improvement in security? It seems like a real air gap vs relying on Linux hardening. Lots of Raspberry Pis (something cheaper now?) vs one larger home server hosting multiple services.

My consumer router has a dmz mode, but I'm not sure how far I can trust it. I guess it's a good thing nobody uses any of the stuff I host.


That's just the reality of it. Your self-hosted box will never be as secure as something hosted by Google, where they have teams of people working full time on securing every single layer of the stack, right down to finding bugs in the CPUs that their servers run on.


Is it though? The more complex a setup, the more of an attack surface. Even stuff like social engineering, tricking their support into giving access to your server is a possibility.

YouTube channels get hacked so often, even from technical people like Linus Tech Tips.


A set of valid points especially

    As self-hosters we are not going to change the face of the world. The other 98% of the general public is going to use hegemonic services: self-hosting is a privilege for those who have the education, time and money to put into it. We’re only deploying solutions that work for us, individually.


It's all niche stuff that only a few people use until there are watershed moments like the Twitter and Reddit fuck ups that push large swathes of users to look for an alternative. Then suddenly it's not a niche product and it's important that the kinks, bugs, and onboarding have been worked out during those years of being niche.

People are absolutely getting sick of subscriptions. It's also getting easier to self host. Tailscale has been a game changer for me personally as I just had no confidence in getting my services working correctly over the internet without getting pwned


> It's all niche stuff that only a few people use until there are watershed moments like the Twitter and Reddit fuck ups that push large swathes of users to look for an alternative.

And then after poking around for a week, they go back to Twitter and Reddit.


As something of a dumdum myself I think I know why. Corporations want people to be able to do their thing as easy as possible to make money, while people not directly motivated like that and not motivated to make it as easy as possible can do anything else. So instead of "make sure it's just a button click" it's "what, you didn't read all 580 pages of the documentation and all the changelogs and the code on github and compile it yourself on a custom built $40,000 machine? We don't help your kind around here go away" and yeah people go right back to windows or twitter or whatever.

Joking aside, I'm just trying to explain that there is a real problem there. Feeling smug about the result of that problem doesn't fix it, but it is really easy to do.


The difference between self-hosting most things, and Twitter and Reddit (and Facebook and Slack and Discord), is the network effect. If I wanted to self-host my pictures that I share with friends, I can still just send them the URL. They might be annoyed that they're not on Instagram and have to use a web browser instead, but the people that want to see how my long weekend went will go see the pictures. To self-host something like Reddit, I need to convince other people to change their habits and their choice of platform. As not-a-million-dollar-corporation, my ability to have a polished UX is rather more limited, so I can see why someone would go back to Twitter and Reddit.


The other thing to consider is that self-hosting is not a binary option - there are degrees to it. On one end, I can upload a Docker image/OCI tarball to a cloud provider and get a service up and running with plenty of application-level customization. Somewhere in the middle, I can get a private server and have a bit more low-level control over my deployments, like tweaking some sysctl parameters, or running a custom-built Linux kernel. On the other end, I can literally buy my own rack server, with all the hardware I need or want installed in it, and send it to a colo for hosting and upkeep (or build my own data center, if I have the money).


Stuffing it into your basement, which is the "build your own data center" option, isn't prohibitively expensive, nor does the hardware have to be. There's a gulf of prices between a Raspberry Pi and a new Dell or HP server. On top of that, getting 5 nines of uptime is costly, but we're not trying to self-host Google.com here. If my personal file server goes down, my friends'll eventually notice, but we're talking about a service that gets 0 rps (requests per second) when all of us are sleeping, so no nines is sufficient. More would be great, but like you said, it's expensive.


This is the best route, in my experience.

If you're interested in tech or gaming, you usually accumulate hardware anyways - putting the old stuff to use just makes sense in most cases.

And I actually don't really agree with the article - My issue with SaaS products is not privacy. My problems are quality and consistency. My self-hosted stuff doesn't auto-update to a version that's less capable or dumb itself down to shove users into advertising flows or "new" features they want me to use.

It's not about privacy - it's about having the computer serve me. It's the difference between a free "financial advisor" peddling scams vs a paid agent with fiduciary duty.


My SLA is only to myself, so it’s not so bad. That said I do host some “critical” services like my password manager, and so making a mistake that takes those down can be a pain. I figure on balance it has been worth it.


Just FYI, I've been running a few services on a Hetzner VM and a few others on a box at home. With a 5-minute uptime ping (Uptime Robot) I am consistently getting 3 to 4 nines of uptime for the home server, and that could be substantially improved if I cared to put in a UPS (my old one failed). As others pointed out, home ISPs are often not great for self-hosting due to things such as asymmetric transfer speeds, periods of higher latency, and short but sometimes frequent outages.


The other benefit is that using self-hostable software makes it harder for centralised deployments to screw users over.

It is harder (but not impossible, and not without its own inconvenience) for mastodon.social to do a rug pull because there are near-identical alternatives that others (or you yourself) host.


This goes both ways. Depending on organizationally managed services results in tools that are honed for institutions. Having self-hosters in the mix generates demand for simplified products that can offer equivalent capabilities with a minimum of effort and oversight. This potentially shifts the entire tool development ecosystem toward humans instead of large groups that have effort to spare.


I think there’s a middle ground for cooperatives but the old problem of fairness rears its ugly head. I don’t want to pay for 20% of something if I’m only getting 5% of the benefit.


If the cost to help maintain the thing is something nominal (say, $20/mo, even as much as $40/mo), then I see it as a form of mutual aid and am happy to pay it to support my friends and friends-of-friends.


As long as one guy isn’t getting 80% of the benefit, I’m game.


Great article. While it mentions monitoring, it took me a long time to appreciate how beneficial it is to do monitoring really well. Things like:

• Knowing when disk space, inode usage, or memory usage get high, long before it’s an emergency.

• Automated monitoring of SSL certificate expiration dates, letting you know days before a certificate expires. Whether or not you use something like certbot, have a separate process that automatically tells you a certificate is close to expiration.

• Automated periodic end-to-end testing of moving parts. Like if you run an email server, a process that sends something from your server to a gmail.com address, and then checks the gmail.com inbox to find the message.

• Automated periodic testing that unexposed ports remain unavailable from outside the device or private network.

• Automated checking that a Linux instance is successfully checking for and installing security updates, and is not waiting for a reboot.

• Automated checking that backups are working as expected. You might not be able to automate periodic restore testing, but at least check that backups do not appear to be silently failing.

• Separating out low priority alerts from high priority alerts. You want to get woken up when necessary, but not for an issue that can wait until you are at your desk.
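
To make one of these concrete, here is a rough sketch of the certificate-expiration check in TypeScript/Node; the hostnames and the 14-day threshold are just placeholder assumptions:

    // cert-expiry-check.ts - warn well before a TLS certificate expires.
    // Hostnames and the 14-day threshold are placeholder assumptions.
    import * as tls from "node:tls";

    const HOSTS = ["example.org", "mail.example.org"]; // replace with your own services
    const WARN_DAYS = 14;

    function daysUntilExpiry(host: string, port = 443): Promise<number> {
      return new Promise((resolve, reject) => {
        const socket = tls.connect({ host, port, servername: host }, () => {
          const cert = socket.getPeerCertificate();
          socket.end();
          const msLeft = new Date(cert.valid_to).getTime() - Date.now();
          resolve(msLeft / (1000 * 60 * 60 * 24));
        });
        socket.on("error", reject);
      });
    }

    async function main(): Promise<void> {
      for (const host of HOSTS) {
        try {
          const days = await daysUntilExpiry(host);
          if (days < WARN_DAYS) {
            console.error(`ALERT: certificate for ${host} expires in ${days.toFixed(1)} days`);
          } else {
            console.log(`OK: ${host} has ${days.toFixed(1)} days left`);
          }
        } catch (err) {
          console.error(`ALERT: could not check ${host}: ${err}`);
        }
      }
    }

    main();

Run something like that from cron and feed the output into whatever alerting you already have.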


Aside from (and secondary to) monitoring, one thing it took me years to realize the benefits and ease of setting up early, and which I think other self-hosters commonly neglect: caching proxies and removing default internet routes.

Benefits include:

- Security

- Ease of configuring traffic control: As long as you're not redirecting UDP (have fun lol), steering apps with HTTP or SOCKS5 forward-proxies is so much more straightforward than routing.

- Performance/efficiency (global package cache for your network!)

- Resilience (apt upgrades and docker image pulls can keep working despite your entire network being offline)

My rough starting kit for a Linux-based network here would be:

- Some caching forwarding internal DNS server. If you already have an internal recursor or forwarder great, but it's good to let the DNS server serving your clients be separate anyway. dnsmasq/unbound/technitium/coredns/powerdns/yadifa.

- Internal NTP for syncing time. May be provided by your DNS or DHCP server already. chrony is good.

- apt-cacher-ng or other caching forward HTTP proxy for your apt/dnf/pacman/apk/whathaveyou updates.

- docker-registry-server in mirror mode and set up as mirror for any docker/podman hosts you have.


Do you have any recommendations or resources you think are great for learning more about this? I think I’m right at the beginning of this journey and looking for where to start.


I wish I did. My approach is that I have a ruby script that runs every five minutes and does a bunch of tests. The script takes a couple minutes to execute. It connects to servers via SSH to check things out, does end-to-end-tests, then it writes its result to a JSON file.

It runs on a Linode instance with a webapp whose sole responsibility is to respond to Pingdom requests. There are two URLs that Pingdom checks: one that returns a 500 if the JSON file indicates an issue that warrants texting me, and a second that returns a 500 if the JSON file indicates a lower-priority issue that warrants emailing me. Pingdom is configured accordingly.

If for any reason the JSON file has not been written in the past 10 minutes (?) or cannot be read and parsed, both URLs return a 500.
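
Roughly, the webapp side amounts to something like this - a sketch, not my actual code; the file path, the 10-minute staleness window, and the field names here are made up:

    // status-server.ts - two URLs for an external pinger (Pingdom, Uptime Robot, ...) to watch.
    // The file path, staleness window, and field names are invented for this sketch.
    import { createServer } from "node:http";
    import { readFileSync, statSync } from "node:fs";

    const STATUS_FILE = "/var/run/monitor/status.json";
    const MAX_AGE_MS = 10 * 60 * 1000; // treat the file as stale after 10 minutes

    function readStatus(): { critical: boolean; warning: boolean } {
      const age = Date.now() - statSync(STATUS_FILE).mtimeMs;
      if (age > MAX_AGE_MS) throw new Error("status file is stale");
      return JSON.parse(readFileSync(STATUS_FILE, "utf8"));
    }

    createServer((req, res) => {
      let code = 200;
      try {
        const status = readStatus();
        if (req.url === "/critical" && status.critical) code = 500;                    // text me
        if (req.url === "/warning" && (status.critical || status.warning)) code = 500; // email me
      } catch {
        code = 500; // a stale or unreadable file fails both URLs
      }
      res.writeHead(code).end(code === 200 ? "ok" : "alert");
    }).listen(8080);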

The script has a log file, so when I get an alert I can check the log file to determine what is wrong.

This is likely atypical, but it works really well for me. My scripts do the work of monitoring the heck out of everything. I only need Pingdom (or a service like it) to monitor two URLs and do the texting/emailing.

But my overall approach is to think of monitoring like unit tests or integration tests: when I think of something that could go wrong, I try to make sure there is monitoring that can detect it and alert me. When possible, before it becomes urgent. And when something does go wrong that is not automatically detected, it's a high priority to add monitoring around that.


I am basically in the camp of "it is impossible to have readily-accessible stuff that you don't have to constantly babysit".

I have a server in my basement with like 35tb of zfs storage to hold my blu-ray rips. The movies are backed up onto tapes, and those are more or less durable but not really readily-accessible (and kind of a pain).

A very large quantity of my time is spent mucking around with disks, and fixing data issues. Even when there's no data issues, there might be a transient read error which causes a fault and I have to spend time dealing with scrubs or at the very least checksumming files to make sure that they're fine.

A masochistic part of me kind of enjoys it, but honestly it's gotten to a point where I'm debating just paying some money to Hetzner or Amazon and selling off the servers.


35TiB on Hetzner or Amazon isn't exactly going to be cheap, but regardless of that, even if you don't give up your local server, I'd still ask what your off-site backup situation is. Two friends' houses got broken into (different cities) and had their shit stolen, and another had their stuff destroyed in a fire, so at some point, I added cloud storage for off-site backup into my strategy.


Amazon storage appears to be about 10 times as expensive as a comparable server at Hetzner.


Serious question: Why go to so much trouble to back up your blu-ray rips? Why not just keep the original discs in a binder / on a spindle, and re-rip them if your hard drive dies?


I have over 400 movies, and about 30 complete TV series. Ripping movies isn't so bad, takes about 40 minutes on my laptop, but ripping TV series is a huge pain in the ass.

Episodes appear to be stored in no particular order on Blu-ray, so I end up having to open the video file, pray for a title card, and match that against an episode order list on Wikipedia.

For some shows (like most British shows) this isn't too bad, since there's a very small number of episodes so re-ripping doesn't take too long. For other shows (e.g. Adventure Time), there's a million episodes, and correctly labeling them takes a lot of effort that I do not want to duplicate.

The thing is...I have re-ripped all my blu-rays. Twice. Because I didn't know what I was doing with ZFS and kept breaking my cluster. I don't really want to do it again.


https://github.com/nicholasadamou/plex-s3fs

Run it w/ Wasabi.

Cheap, durable, fast enough.


Sandstorm would have been nice, but I think a reasonable way to go nowadays might be to write software so that it’s easily deployed on Netlify or Deno Deploy and encourage people to fork your repo and run their own website.

You’re still writing software for others to use, but you don’t take responsibility for their uptime or content.

It’s a little bit of a barrier because you need to create two free accounts (including GitHub) and learn your way around. Part of open source in practice is education and I think teaching people enough so they can edit a file on GitHub would be empowering, even if that’s as far as they go.

Those are services I’ve used that have a free tier and seem pretty low-maintenance. What would be other good choices for this sort of thing?


This is a very complete article that covers most of the pitfalls of self-hosting services.

As someone who self-hosts a good chunk of the services that I use, I am in total agreement about the challenges facing the poor souls who seek alternatives to big tech.

From Oracle (allegedly) randomly shutting down instances [0] to Google doing A-B testing [1] on how to further lock down Youtube videos.

It truly is a treacherous journey for the self-hosters.

[0] https://gist.github.com/yewtudotbe/c16a69ddad88a37c2a364a5ff...

[1] https://github.com/iv-org/invidious/issues/4027#issuecomment...


I would love to build my next web project so it will not save any data on the server but let the user save it locally via the File System Access API.

That would give the user the same experience as with a desktop application. Full control over their data, saved locally.

The problem is that, according to my tests, Firefox does not support it at all. Chrome does not support it on Android, and Safari does not support it on iOS. Not sure about Safari on the desktop.

Here is a text editor demo which lets you try whether it works with your browser:

https://googlechromelabs.github.io/text-editor/

If your browser supports it, it will let you load and save files just like a desktop application. If it does not support it, it will use a download/upload workaround.
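
For anyone curious, the save path with that fallback looks roughly like this - a sketch in TypeScript, not the demo's code; the function name and file name are made up:

    // saveText - use the File System Access API when present, else fall back to a download.
    async function saveText(text: string, suggestedName = "notes.txt"): Promise<void> {
      if ("showSaveFilePicker" in window) {
        // Supported: the user picks a real file and we write to it in place.
        const handle = await (window as any).showSaveFilePicker({ suggestedName });
        const writable = await handle.createWritable();
        await writable.write(text);
        await writable.close();
      } else {
        // Unsupported (e.g. Firefox, iOS Safari at the time of writing): plain download.
        const url = URL.createObjectURL(new Blob([text], { type: "text/plain" }));
        const a = document.createElement("a");
        a.href = url;
        a.download = suggestedName;
        a.click();
        URL.revokeObjectURL(url);
      }
    }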


There is a middle ground which all browsers do support and which does not require permission prompts - the Origin Private File System: https://developer.mozilla.org/en-US/docs/Web/API/File_System...

If you're not familiar, it's a file-system-like API for writing files to an opaque, non-user-accessible file system. Your application could probably provide its own export functionality using blob URLs, and import using traditional file "upload".
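
A rough sketch of the round trip, with the caveat that createWritable availability still varies a bit by browser (some only expose synchronous access handles inside workers):

    // Write and read back a file in the Origin Private File System - no permission prompt,
    // but the data lives in opaque browser storage, so the app needs its own export path.
    async function opfsRoundTrip(): Promise<string> {
      const root = await navigator.storage.getDirectory();
      const handle = await root.getFileHandle("notes.txt", { create: true });

      const writable = await handle.createWritable();
      await writable.write("hello from OPFS");
      await writable.close();

      const file = await handle.getFile();
      return file.text(); // "hello from OPFS"
    }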


The problem with these is that nobody has a single user agent anymore. Haven’t for years. If I need files I need them on my phone and tablet, or tablet and laptop. Those services have yet to become standardized.


> according my tests, Firefox does not support it at all

I just tried the text editor example in Firefox and it works fine for me, although all the newlines in my file were ignored so it looks like garbage. Maybe it assumes Windows-style line endings?

EDIT: Oh, no, it just doesn't support line endings at all? Even if I press the enter key I just get a space. Maybe it's just a proof of concept and not an actual working text editor.


I like that effort!

But it only addresses half of the value of self-hosting (which is much better than nothing). The other half is: being able to have control over the software itself, when/if it gets updated, being able to be sure what's done with the data (if you're sufficiently motivated), and not having the service become unavailable when the internet is out.


An alternative is to use Electron and ship your app with your own chromium.


One of the 'services' I host is a simple interface on a server that allows me to easily upload files and get sharable links to them.

At this point it's used by more than just me, a bunch of people in my circle use my instance to share files.

In case anyone else finds this useful: https://github.com/aaviator42/izi

There's a demo here: https://aavi.xyz/proj/fakeizi/


I don't mind self-hosting, but I dream of a world where FOSS desktop and mobile apps have p2p sync (maybe with CRDT) so that everyone could use them without hosting, even my mom.


Syncthing is partly in this direction.


Yeah, Syncthing is awesome. Even more with untrusted share.


I used to expose my services to the internet. Now I use WireGuard through OPNsense to connect remotely. The attack surface is small and I’m still even able to stream videos that are located at home.

I'm not a security expert but it makes me feel like keeping software up to date is less urgent. That lets me stick to one version for a while once it does everything I like. The stability of experience and ease of use is great.


Another option is to stick an authenticated reverse proxy in front of whatever you’re hosting. No software needed on the client beyond a web browser, and you only need to worry about say the security of Nginx…


I have created Elestio (https://elest.io) to address this pain; we take care of all aspects (infra, deployments, security, DNS, SMTP, backups, monitoring, alerts, updates, migrations ...) and we do it for a catalog of 233 open-source applications, and also for CI/CD pipelines to deploy your own code from a GitHub/GitLab repo.


It was exciting until I saw the pricing for self hosting...

"Pricing is based on the specification of the VM you are connecting:

$5 per vCPU + $2.5 per GB of Ram + $0.25 per 10GB of disk.

Example: if you are connecting a VM with 2 vCPU + 4GB Ram + 40GB Disk, the cost per month will be:

(5*2) + (2.5*4) + (0.25*4) = ~$21/mo

Price per hour is then calculated like this: $21/730h = $0.02876/h

You can create one BYOVM service for free. To be eligible the VM you connect must have no more than 2 vCPU, max 4 GB of ram and max 80 GB of storage." https://docs.elest.io/books/cloud-providers/page/byovm-bring...


I understand your point, but we have to maintain a huge amount of OSS and also provide human support when needed. That's why it's not free. FYI we do publish most of our stuff open source on GitHub (https://github.com/elestio-examples/) and also on Docker Hub (https://hub.docker.com/u/elestio). Some companies prefer to pay us to get peace of mind and a strong DevOps team to help them when critical stuff goes wrong.


I understand your need to maintain profitability. However, this pricing is structured for corporate customers. BYOVM home enthusiasts expect a flat fee that's easy to understand. Pricing based on hardware scaling isn't really acceptable for that market.


This looks great! I see on your website that you have corporate users. Do you see a lot of interest from companies for this kind of product?


Yes we do, especially for exotic software when we are the only managed option available on the market


Wow this looks really good. Well done! Good work!

Could you share how you think you compare to Cloudron? Are you kind of an IaaS host coordinator?


Thanks for your kind words :) The main differences are the human support, the catalog size, CI/CD from Git, and that we help users in case of anything going wrong or with advanced customizations. Also, we include hosting + management fees + human support in a price per hour, and we also have BringYourOwnVM and BringYourOwnAWS account options.


I have a few hand-written Node apps with a small Express API. Can I deploy those on elest.io without having to worry about the underlying OS and its security and network setup? That would be very interesting for me. Kind of like a simple and cheap web hoster with PHP: upload your files, forget about the rest. Is that what you offer?


hi, fellow Elest.io user here. Yes, that is a perfect use case, aside from a regular OSS deployment. The CI/CD feature is exactly what you’re describing - it’s basically a private netlify/vercel, but can run express on a server (vs. serverless only/no express)

I myself run a Next.js app on it, and it's amazing how much faster it is than Vercel… even on the cheapest VM (1 CPU / 2 GB RAM) I now get instant page loads vs. long, long seconds of waiting with Vercel.


I've recently mentioned* that I believe the serverless model to be a great fit for self hosting needs.

It enables a kind of bring-your-own-account (BYOA?) installation process, where self-hostable services would be built entirely on top of managed services.

- Infrastructure as code. The installer takes in any <cloud_vendor> account and provisions + configures the required components

- High availability built in

- no need to support old or niche hardware

- On-demand costs structure. Many self-hosted services don't need to run 24/7

My biggest fear with a Raspberry Pi or a VPS is security. But self-hosting does not mean my-server-hosting. Some amount of vendor lock-in is acceptable, and using the same APIs and processes as enterprise users sounds like a win. At least compared to not self-hosting at all.

Of course many things are still missing:

- self-hosted tools that actually work like this

- connection between data center and home. To integrate with smart home/IoT and similar things

- a reliable billing model for less technical users. It has to be impossible to rack up huge cloud bills

For now I guess it's just not yet mature enough. But I would like to see the serverless mentality finding its way into self-hosted software communities.

* https://news.ycombinator.com/item?id=36986980

An example of what I mean: https://github.com/full-stack-serverless/conference-app-in-a...

I don't see any reason why that shouldn't also work for more typical self-hosted applications
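
To make the infrastructure-as-code point concrete: a hypothetical "bring your own AWS account" installer could be little more than a CDK stack the user deploys with their own credentials. Everything below is illustrative, not an existing tool:

    // self-host-stack.ts - hypothetical BYOA installer: `cdk deploy` with your own credentials
    // yields managed storage plus an on-demand function, with no box at home to patch.
    import { App, Stack, RemovalPolicy } from "aws-cdk-lib";
    import * as s3 from "aws-cdk-lib/aws-s3";
    import * as lambda from "aws-cdk-lib/aws-lambda";

    const app = new App();
    const stack = new Stack(app, "SelfHostedAppStack");

    // Durable state lives in managed storage instead of on local disks.
    const data = new s3.Bucket(stack, "AppData", {
      versioned: true,
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // The service only runs (and only costs money) when it is invoked.
    const handler = new lambda.Function(stack, "AppHandler", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromInline(
        "exports.handler = async () => ({ statusCode: 200, body: 'ok' });"
      ),
      environment: { DATA_BUCKET: data.bucketName },
    });

    data.grantReadWrite(handler);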


The problem for me is there's no "generic" serverless stack, so you're then locked into one single vendor which defeats a lot of the personal control appeal of self hosting for me.


For me, it turned out that buying hardware and hosting a Minecraft server in a garage is simply much cheaper compared to cloud providers. This is why I am for self-hosted stuff.


I do as well, and I want to, BUT I dislike two key facts:

- the development and use of services useful at small scale has essentially ceased in the last decades, meaning it's harder to keep up. We still have email (even if current antispam solutions make it hard to have a personal mailserver able to communicate with anyone), but feeds are more and more useless since most sites either do not offer them or publish just titles and ads, and so on;

- older services got abandoned and modern ones try to mimic the giant ones, being needlessly complex and heavy for personal use.

Let's be clear:

- we do not have modern MUAs that are comfy enough. Yes, we have notmuch-emacs and Mu4E, but a proper setup demands a few hundred SLoC at least, not something as simple as stating: this is the root dir to download all my messages into, keep them on the server or delete them, a few filters and auto-refile rules, remote credentials, and stop;

- we do not have file sharing the easy way; the least obscene option is WebDAV, which is supported by most OSes, but most people do not know it, so we end up needing web apps that mimic a Google Drive-like file manager to make our files reachable by others;

- we have lost most of the desktop computing model, with people on limited and limiting mobile devices, which happen to be integrated only with cloud crap;

- IPv6 is not that widespread in the form of a global address per device, and personal domains are not much used by most.

Technically ANYTHING needed is there, but since most people do not know it, and some big & powerful players want everybody on their servers, we essentially have very little margin for maneuver.

Modern telephony is old classic VoIP, but most carriers do not offer the few settings needed to connect a softphone or a personal PBX (Yate/Asterisk) to them. Mail is still there, but for most people mail means webmail; some big vendors have buggy IMAP (GMail), or no IMAP/POP at all (TutaNota), or try to push their new favorite protocol (Proton Mail/JMAP). The value of having messages managed on personal iron, locally indexed, having a ___domain name with various subdomains and so on is unknown to most. Cars nowadays have wifi and mobile connections but nothing to connect them directly to their formal owner; everything goes through the OEM's server, which makes the OEM the substantial owner.

The 2030 "you'll own nothing" vision is a THREAT TO HUMANITY, but most seem to like it, and a few like the profitable outcome of it. That's the real issue.


I've tried leveraging several self-hosting solutions like TrueCharts (an app catalog for TrueNAS SCALE) or Unraid, both of which can use ZFS.

Neither application has a built-in mechanism for backing up your application data that is anywhere near user-friendly.

TrueCharts does not have a GUI backup mechanism to protect user data per application. One application becomes corrupt? Good luck: you have to restore all applications to a single point in time with their command-line tool.

Unraid does not have any sort of built-in backup mechanism and relies on the community for setting up backups.

ZFS replication is not enough; special care must be given to ensure applications are in a correct state to prevent data corruption.


There's just some stuff that you have to host yourself. In my case, it's my garage door. Instead of having to be there to press the button on my garage door remote, I hooked it into a microcontroller and can now control it over the Internet. I then expose it via a tiny PHP script and Tailscale, and now, not only can I let people into my garage remotely, but they can let themselves in with their password. An expensive business feature for an apartment complex if I were to make a product out of it, but I built it myself and self-host.
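
Mine is literally a tiny PHP script, but the shape of it is roughly this (sketched here in TypeScript; the device URL and passwords are obviously made up, and it's only reachable over Tailscale):

    // garage.ts - password-gated endpoint that tells a microcontroller to "press the button".
    import { createServer } from "node:http";

    const DEVICE_URL = "http://192.168.1.50/toggle"; // hypothetical microcontroller endpoint
    const PASSWORDS = new Map([
      ["alice", "correct horse battery staple"],
      ["bob", "another long passphrase"],
    ]);

    createServer(async (req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");
      const user = url.searchParams.get("user") ?? "";
      const pass = url.searchParams.get("pass") ?? "";

      if (PASSWORDS.get(user) !== pass) {
        res.writeHead(403).end("nope");
        return;
      }

      await fetch(DEVICE_URL, { method: "POST" }); // ask the microcontroller to toggle the door
      console.log(`${new Date().toISOString()} door toggled by ${user}`);
      res.writeHead(200).end("door toggled");
    }).listen(8443);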


I feel the same way. The way companies have abused the privacy of the public is awful, and I am in a position to run my own services, but it's not something most people can or should do.

I think Docker has made this a lot easier than it was, and the new NAS operating systems make deploying common popular containers really easy, so it's more accessible than it once was.


I wish there was someone I could trust to host for me. I use fastmail for email after giving up self hosting 15 years ago. I like that they take care of applying security updates and everything has just worked. They are also big enough that everybody accepts email from them so I don't end up in automatic spam land. Unfortunately they do email well, but they don't do a lot of other services I'd like - backup all my pictures as I take them for example.

Google wants me to use them, but they have earned my lack of trust - between deprecating services that look useful, the algorithm locking a few people out with no way to get back in, and random changes that break useful workflows, I'm not interested.


What do you want to have hosted for you?

Feel free to contact me for a totally not shady hosting :)


Something good for backing up my family pictures that I may or may not want to share with other family members or friends (but for sure don't want to share with random people).

Not shady is important. As is reason to believe that you are contributing to making the software you host better (as opposed to mooching off free software).


Maybe Nextcloud with its E2EE can do that, but I am not sure if it works to share the files with other people while protecting the files from the host admins.


I think the web needs a bit of a redesign so that all of the responsibilities associated with hosting don't fall to a single entity. Responsibility and power go together, and power corrupts.

I ought to be able to choose a person that I trust to not lose or leak data, a different person to curate code for use on my device, and yet another person for being authoritative about the problem ___domain.

If I later decide that Jimbo has bad taste in client side code, I shouldn't have to also abandon Mary's excellent data handling track record. Yet somehow we've found our way to a corner of the design space where each entity that carries any of these burdens also carries the others.


How is this not already achievable today? You’re free to use Cloudflare to cache, AWS for your web servers and GCP for your DB. It’s just not as practical to do this from an IAM standpoint and cloud providers are offering incentives to get more of your hosting business.


Not really, I used ten or fifteen web apps today and I was totally at the mercy of whoever designed them re: who had access to the data I generated, re: which front end I used, etc.

It's the user's trust preference that I want to matter, not the app developer's.

My NAS appliance has plenty of storage available, I want to select it as my storage backend so that if the internet goes down everything that doesn't require collaboration still works for me. And not because I've been very choosy about what code I rely on, but because that degree of composability is built into the protocols.


I have created Easypanel [1] to make it easier for developers to self-host applications. However, non-developer people started to use it for deploying open-source apps. Our "templates" [2] list is now at 200+ apps. I would say 50% use it to power their "homelab" and 50% are developers who use it for their apps.

[1] https://easypanel.io

[2] https://easypanel.io/templates


backups are a paid feature? this feels one step down from the SSO tax.


I see it differently.

I am paying for hosting, I would have to spend time to make backups on my own, now someone with experience is doing backups for me.

It is just like I can change oil in my car on my own if I go to the shop they are not going to do it for free because I can do it on my own.

"SSO tax" is also because someone has to spend time to set it up and maintain for specific company. We slowly get to OAuth2 everywhere and Azure Active Directory/Other providers where it won't be a hassle but still bunch of big companies keep on their outdated SAML services thinking it is secure and they would like you do do their job.


Some features ought to be essential. Being able to follow best-practices security and having backups ought to be included.


Yeah, included in the price - if you don't want to pay for backups you can pick the $0 plan.

I don't see how it is wrong not to include backups in the $0 plan - just open the Easypanel pricing page.


Why should anyone use your free plan without backups? Your service is nothing but liability and vendor lock-in. That mentality is why I won't even pay for your service. By making backups a premium-tier feature you effectively hold free-tier data hostage through liability.

Once the user becomes invested in the free tier it's impossible to migrate the data out of your service, or to architect your own backups of that data, without significant risk and a complicated stack. The whole point is to have a secure and reliable experience. You could restructure the free tier to maintain profitability but include backups. Heck, you could even limit the free plan to 2 backups.


Well, you answered that yourself - they should not use it for free, they should pay me.

Do you go to a baker and tell him that you want free bread? You know the baker cannot even give you free bread - even if it is old and no one would buy it - because of regulations...

The free plan is just so you can see what the options are and try it out to see if it fits your needs. If you want to rely on the service you should pay.

Sorry, but by EU regulations where I reside you can always take your data away. You can also take it away each month, essentially making your own backup. If I make backups for you, pay.


This makes the free plan less than useless; it makes it a footgun. I'd rather a service offer me a time-limited trial and then cut me off rather than play with my data like that. Someone who's less experienced might disagree and lose their data instead, and isn't that a bad look for everyone involved if it happens.

If you can't afford to offer a viable free plan, don't? If I go to a baker and they offer me free breadcrumbs, they should at least not be mouldy.


Part of evaluating your service is evaluating backup and restore. If that's not robust or reliable, then your service isn't valuable.

If I were paying you to store my backups on your off-site server, yes, I could see your logic. However, backing up and restoring to my own local hardware incurs zero cost to you.

I'm not above paying for a service that values the end user and their data.


Are the "security updates" really required?

Let's say you are running Ubuntu 20 with nginx simply hosting some static websites, and you let it run for 5 years without any updates.

Are there vulnerabilities so big found with the OS or very popular software (like nginx) that they could compromise your server and give root access?


This post resonates with me and briefly acknowledges the thing that scares me the most about self hosting personal stuff for myself and loved ones: the bus factor. I haven't heard many self-hosting proponents talk about their strategy to mitigate the bus factor. I really want to self-host, but it seems like such a headache and a risk.


I think the only way is aggressively simplifying your solution, documenting the hell out of it (playbooks for rebooting, exporting data, etc), and leaving a notebook of passwords in a safety deposit box/safe.


I want to self-host all of my data such as calendar, contacts, photos and more but I just haven’t dedicated the time yet


Here you go: https://www.pastery.net/xtydav/

apt-get install postgresql, connect it, and you're done.


>, and you're done.

It doesn't seem that simple. When I researched Nextcloud in the past, I avoided it because of warnings like the ones in this thread: https://news.ycombinator.com/item?id=25481465

Ctrl+F search that thread for "failure".

If Nextcloud has solved whatever issues were happening in 2020, it still doesn't necessarily instill confidence because one can remain skeptical and assume there are new issues still happening in 2023. E.g. https://github.com/nextcloud/server/issues

It's going to take some time to wade through all those Github issues to determine if there are any showstoppers that would affect one's installation. This doesn't look like a low-maintenance solution. The gp's wording of "dedicated the time" seems very relevant. Copy&paste of some YAML doesn't really address the work involved.


Well, personally, I've been running it for years and it's never had as much as a hiccup, but YMMV.


For photos I recommend Immich (https://immich.app/) - I recently switched from Nextcloud to Immich and it's ridiculous that this is free and open-source software - it feels so premium.


The biggest reason for me is costs - for personal use, cloud/SaaS pricing is way too expensive.

The second is that having to read and learn provider-specific documentation is a waste of time (i.e. deploying on Fly/Supabase/Heroku/Netlify, which all have their own CLI tools and their own config syntax).


True for me too. The costs for cloud services _seem_ like they're higher to get started _and_ I worry about the cost overruns you hear horror stories about.


I’m writing an app that relies on four different servers.

I’ve written 3 of them.

We’re unlikely to self-host, but we’ll almost certainly be doing some kind of cloud service for them.

Thankfully, the scale is minuscule, compared to what a lot of folks, hereabouts, are used to.


If the mods on r/selfhosted could read, they would be very angry at this.


This is my dream for the blockchain: a massive global computer that no one controls. I can run my own services on it, using cryptography to maintain privacy when necessary, and I don’t need to worry about all the annoyances of self hosting. Everything will “just work” in perpetuity.


No such thing.

First, blockchains are terribly limited capability-wise. You'd be much better off with a raspberry pi.

Second, there's no such thing as "no one controls". There's always control. Somebody is at the top of every blockchain in existence, and their interest probably doesn't align with yours.

Eg, Ethereum being expensive is a problem for the users, but the people who get paid the fee love it, so there's no reason for them to be interested in decreasing costs.


This is like the “horses are faster than horseless carriages” argument.


How so?


> First, blockchains are terribly limited capability-wise. You'd be much better off with a raspberry pi.

Blockchain = horseless carriage

Raspberry Pi = horse

A Raspberry Pi has a number of disadvantages compared to a blockchain. It needs power. It needs an internet connection. It degrades over time. The advantage of the Raspberry Pi is, as you mention, that the clock speed (so to speak) is faster / cheaper. But that doesn't mean it will always be so. Someday, I believe blockchains will be more efficient and have fewer downsides than a Raspberry Pi.


Um, what do you think a blockchain runs on? The hardware is still there, the only difference is in what software is used on top.

Blockchains need power and internet, and run on hardware that needs maintenance and degrades over time.

> Someday, I believe blockchains will be more efficient and have fewer downsides than a Raspberry Pi.

A blockchain can never be more efficient than a computer because:

1. It runs on top of computers, and does extra stuff. Extra stuff inherently has costs.

2. Blockchains are based on very low trust systems and use trickery to overcome it. This has enormous costs.


I didn't say a blockchain would be more efficient than a computer. I said it would be more efficient than a Raspberry Pi. And yes, of course a blockchain needs electricity, but the cost and burden of running the blockchain is covered by transaction fees, which you only pay when you're actively engaging with your app, which is different from the Raspberry Pi, where you need to pay even when your app is idle.


It seems impossible that a blockchain would provide a useful amount of service when compared to a Raspberry Pi.

A smart contract is more or less the same idea as an AWS Lambda. But in AWS, you trust AWS to compute on your behalf. In a blockchain, you don't. Trust is replaced by enormous amounts of redundancy where every node recomputes everything to make sure everything was done right. So there's no scaling. The capacity of a single node and a million nodes is the same.

Right now, the cost of storing 1KB of data on ETH seems to be about $17. At the same price you can have a 128GB SD card, which is 128 million times more storage. A Pi will cost you $35, and $5/year in electricity if running 24/7 at full power.

Blockchains are ridiculously non-competitive.
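For anyone who wants to sanity-check those numbers, here is the back-of-the-envelope arithmetic (a rough sketch in Python; the ~$17/KB figure is a point-in-time estimate and moves with gas prices and the ETH exchange rate):

  # Rough cost comparison: on-chain storage vs. a cheap SD card.
  # Assumed figures, not authoritative: ~$17 to store 1 KB on Ethereum,
  # ~$17 for a 128 GB microSD card.
  eth_usd_per_kb = 17.0
  sd_usd = 17.0
  sd_kb = 128 * 1000 * 1000      # 128 GB expressed in KB (decimal units)

  kb_per_usd_chain = 1 / eth_usd_per_kb
  kb_per_usd_sd = sd_kb / sd_usd

  print(f"chain: {kb_per_usd_chain:.3f} KB per dollar")
  print(f"sd:    {kb_per_usd_sd:,.0f} KB per dollar")
  print(f"ratio: ~{kb_per_usd_sd / kb_per_usd_chain:,.0f}x")  # ~128 million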


Again, it’s not about the raw cost, but about not needing to worry about the cost and having things run “forever”. Even if you use AWS, you still need to pay your bill every month.

I don’t really understand your argument, tbh. Like, what if you have an app that only needs to serve 1 KB of data? From your own calculation, it would be much more cost effective and operationally effective to use a blockchain. But you seem to be saying that doesn’t make sense because you could serve 1 KB from a more expensive thing with over provisioned storage.


> Again, it’s not about the raw cost, but about not needing to worry about the cost and having things run “forever”. Even if you use AWS, you still need to pay your bill every month.

It's not free though, you're hoping that the system is kept up by other people paying for it.

And in general blockchains are very awkward to actually interact with. Pretty much everyone is using a centralized, traditional interface to them. Which means you might as well do it the old boring way.

> I don’t really understand your argument, tbh. Like, what if you have an app that only needs to serve 1 KB of data? From your own calculation, it would be much more cost effective and operationally effective to use a blockchain.

If you need to serve 1KB of data, there's lots of places that'll do it for free. Github, pastebin, random forums, etc.

The point wasn't that I want to store 1KB. The point is that even an insignificant amount of data, the sort that is a rounding error on modern hardware, is already quite expensive to store.

> But you seem to be saying that doesn’t make sense because you could serve 1 KB from a more expensive thing with over provisioned storage.

I'm addressing your "I said it would be more efficient than a Raspberry Pi." by making some quick calculations to show that the blockchain is not just behind a Pi, but literally millions of times worse. It's not just uncompetitive; it's not even in the same ballpark.


> a massive global computer that no one controls

who is paying for this?


I know it's meant as a rhetorical question, but I think it deserves an answer for everyone around here who still doesn't get it: you and I, and every other person on Earth, no matter whether they are a blockchain enthusiast, or actively interested in its demise.

Proof of waste is a colossal externalised cost; you think you're trading "your" electricity and dollars for "your" imaginary money; but the fact is, you're wasting my planet. Cryptocurrencies have already caused enormous harm, and even as the fad is waning, it can't die soon enough.


Might want to read about proof of stake.


Ends do not justify the means. PoS is blood stained.


How so?


How did you build the stake?


You buy it?


See my original comment.


And if a particular blockchain only ever had proof of stake? Or is your argument that all money has immoral origins?


Sure, name one.


Solana


Brilliant, now consider the context of TFA. Is it any good as a platform for self-hosting applications for personal use?

From https://en.wikipedia.org/wiki/Solana_(blockchain_platform)

> Solana was launched in 2020 by Solana Labs [...].

It was founded by a single corporate entity that continues to govern the project's direction...

> The blockchain has experienced several major outages, was subjected to a hack, and a class action lawsuit was filed against the platform.

...was subject to major disturbances...

> Solana's total market cap was US$55 billion in January 2022. However, by the end of 2022, this had fallen to around $3 billion following the bankruptcy of FTX.

...and meanwhile lost 95% of its value following the bankruptcy of an unrelated party.

Read the entire comment thread under the original article. Try to comprehend the actual issues that people are struggling with. Explain how Solana is going to make self-hosting less painful. Why do we choose to self-host? Privacy, simplicity, ownership/sovereignty, cost? What issues are we struggling with that better technology could ease? Maintenance burden, proper backups? So what value does it bring to self-hosters?

Can I run my calendar and contacts sync on top of Solana? What happens if my private key gets leaked - everyone gets to know when exactly I went to the cinema with my partner? Privacy - bust.

What about email? I need to point the MX records at something the rest of the Internet knows how to connect to - what's the point of a decentralised datastore if I still have to mess around with hosting a publicly reachable node? Simplicity - bust.

What happens when the network experiences another outage, lawsuit, or market crash? Ownership and sovereignty - bust.

What about storage - I currently need ca 6TB for personal backups - am I going to upload it all to the blockchain? What's the cost of storing 6TB on-chain, just once? Cost - bust.

What's the story with maintenance? Assuming there's some magical fairy dust that makes most of the fundamental problems go away, is running your personal infra on top of a cryptocurrency-backed blockchain going to make you sleep better at night? What's stopping someone from mounting a 50%+1 attack the next time the coin loses 95% of its value overnight?

Is there a single thing that I use a home server / VPS for today, that Solana can do better?


The users. Each transaction in a blockchain has associated fees. Those performing the transactions have to pay the fees. It’s not dissimilar to how you pay for EC2 instance time or AWS lambda invocations.


https://internetcomputer.org/ seems to promise that. but i still can't wrap my head around how that is supposed to work. does it turn every participating computer into a remotely accessible CPU so your code will run on random machines somewhere in the world?


You should try writing a smart contract and it will become clear how it works.


as i understand it, a smart contract is stored on the chain and is executed on your computer whenever you make a transaction that triggers the contract, or something like that. so the whole thing happens on my or your computer and not somewhere else.

when i host a website and a user accesses that website, then it causes the host server to compute whatever the user asks for.

if i put my "website" on the chain as an app (a dapp?) then i guess it is downloaded to the user's computer and executed there.

how can the blockchain help me access computing resources on third party services?

i suppose that devices could register themselves on the blockchain as having computing resources available, and an application can then access those resources.

am i missing anything?


It's much, much worse than that.

Smart contracts execute on the blockchain, which means they execute on thousands or millions of computers in lockstep. Extra hardware doesn't add capability, it adds redundancy.

Since the blockchain is public there are things you can get out of it for free by looking at the public state, but actually performing a computation that has some sort of world-wide visible effect involves executing that code on every node at once, so it's amazingly costly.
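A toy model of that, if it helps (made-up numbers; not a measurement of any real chain):

  # Toy model: adding nodes adds redundancy, not throughput.
  def network_throughput(tx_per_s_per_node: float, node_count: int) -> float:
      # Every node re-executes every transaction, so the network as a whole
      # can only go as fast as one node, regardless of how many nodes join.
      return tx_per_s_per_node

  def total_work(tx_per_s_per_node: float, node_count: int) -> float:
      # ...while the total compute burned grows linearly with the node count.
      return tx_per_s_per_node * node_count

  for n in (1, 1_000, 1_000_000):
      print(f"{n:>9} nodes: {network_throughput(100, n)} tx/s throughput, "
            f"{total_work(100, n):,} node-tx/s of work")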


a cursory search suggests that this is not the case and that smart contracts only run on a single node. the result of the smart contract that creates another blockchain entry causes the calculation of the new entry on every node that is mining.

internet computer should be using the blockchain to manage computing resources that themselves also don't run on the blockchain.


It amounts to the same thing, really.

Blockchains are structured so that you don't need to trust, you can verify. You follow the longest valid chain, and by "valid" I mean that your node verifies every block and checks everything complies with the rules.

For running code, it's the same thing. You can trust that your code won't be subverted by some rogue node because everyone else is double-checking those computations, and if a node breaks the rules its work will be rejected.

> internet computer should be using the blockchain to manage computing resources that themselves also don't run on the blockchain.

Not sure what this is supposed to mean. In general though blockchains are only effective within themselves. Any interaction with the outside hits the oracle problem -- you're trusting some outside source not to lie, so the blockchain doesn't really add anything.


> the blockchain doesn't really add anything

well, that is what i am trying to understand. for now i don't think that is right. it doesn't make sense to put everything directly on the chain. that would be too costly. not every computation needs to be trusted. i can still implement some checks to verify that computations are ok without the computation needing to run on the chain itself. but the chain can provide a mechanism to manage the resources (who gets to use how much and where, and who pays for it), which would be more difficult to do without a chain.


No, the chain only has authority over itself.

It's the same issue as with NFTs. Yes, you can store on the blockchain stuff like "User 0xabcdef owns the Godslayer of Hitpoints sword". And a game can look at the blockchain, and make use of that record.

But absolutely nothing whatsoever stops it from deciding to stop caring, or replacing your fancy sword with a rubber duck, or deciding that screw this particular user, they're banned regardless of what the chain says.

Also, even to the extent this stuff works, it's not nice to use. If the blockchain says you have some sort of resource (money, quota, etc.), then if somebody ever gets their hands on your account they're now the rightful owner as far as the blockchain is concerned. So if, per the chain, you're owed, say, a year's worth of GPU time or something, that automatically becomes a self-paying bounty. Everyone can see it, and has an incentive to try and hack you.


how could data off the chain be replaced if it is signed? sure it can be lost, but that's a different issue. if the chain itself is trusted and it says that, say, a movie that has this signature is owned by this studio, then no one can replace the movie or even steal it because they need the chain to access it. if the studio doesn't keep a copy of their movie and they lose it, that's their own fault. same with NFTs. i think the idea is stupid, but i don't see the risks you are suggesting.

to your last point, that's only an issue if you pay in advance, not if you pay after use. and stealing money on the blockchain is nothing new. that's of course a problem, but one that affects the whole ecosystem, so it will eventually have to be fixed. adding computing resources into the mix doesn't change anything.

i am not really interested in discussing why something should or should not be done. because i do not know enough about how it actually works before i can argue about that.

so i really first want to understand how it works so i can make up my own mind on whether it is a good idea or not.


> how could data off the chain be replaced if it is signed?

It's very simple. You ignore the chain. The chain isn't magic, if it has any authority it's only because something else decides to consult the chain and to care about what can be found there.

Any actual usefulness of the blockchain is limited to things that reside exclusively in the blockchain.

Anything external, like some service that determines ownership of movies (not sure what scheme you're envisioning exactly) can arbitrarily decide to stop caring about the chain at any time, or to even selectively stop caring about parts of it.


of course you can ignore the chain, but i already said that. if you have assets outside the chain you need to watch them yourself. but if the key to open that asset is on the chain, then even if you get a copy of the asset you can't do anything with it without using the chain. you can't replace it either because that would change the signature, and the owner can just reupload his original copy with the correct signature.


> if you have assets outside the chain you need to watch them yourself. but if the key to open that asset is on the chain

How is that supposed to work? Blockchains are public, they don't keep secrets.

> then even if you get a copy of the asset you can't do anything with it without using the chain.

Sure can. Take the key off the public chain, put it on pastebin. Done.

> you can't replace it either because that would change the signature,

Sure can. Yeah, signatures may change. But that only matters to the extent that people care. The chain has no effect.

> and the owner can just reupload his original copy with the correct signature.

What owner? I still have no idea what you're envisioning here. What's this for? What purpose does it serve? How does it do that technically?


> Blockchains are public, they don't keep secrets

you can store encrypted data on the chain. if i sell you a movie, i can give you the encrypted movie and use your public key to store an encrypted message with the key to open the movie. that way i can use a smart contract on the chain to track if you opened the movie.

> Yeah, signatures may change. But that only matters to the extent that people care. The chain has no effect.

if the signature is on the chain and the chain is used to verify your ownership claim, then how exactly does the chain have no effect?

> What owner?

the owner of the digital asset. an NFT for example, let's ignore for a moment that NFTs are stupid. or a movie. the purpose that it serves is to track the ownership of an asset. sure, you can make a copy of it. but you don't own that copy. if i discover that you made a copy, i can charge you with theft (see piracy) because i'll use the chain as proof of ownership and i can prove that you don't own your copy.

i am not saying that we want this. i am not a fan of blockchains. but to my understanding this is how they can be used.


> you can store encrypted data on the chain. if i sell you a movie, i can give you the encrypted movie and use your public key to store an encrypted message with the key to open the movie. that way i can use a smart contract on the chain to track if you opened the movie.

That's not going to work. Any encrypted material in the blockchain is public. Any algorithm in it is also public. So I can just execute the code by hand, skipping any tracking code.
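To make that concrete, here is roughly what "executing the code by hand" would look like under the scheme you describe -- a hypothetical sketch using the Python cryptography package; the names are invented and nothing here talks to an actual chain:

  # Hypothetical: the buyer copies the wrapped key and the ciphertext straight
  # out of the public chain data, then decrypts locally with their own private
  # key. No transaction is sent, so the contract's tracking code never runs.
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding
  from cryptography.fernet import Fernet

  def decrypt_offline(buyer_private_key, wrapped_movie_key: bytes,
                      movie_ciphertext: bytes) -> bytes:
      movie_key = buyer_private_key.decrypt(        # unwrap the symmetric key
          wrapped_movie_key,
          padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                       algorithm=hashes.SHA256(), label=None))
      return Fernet(movie_key).decrypt(movie_ciphertext)  # decrypt the movie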

> if the signature is on the chain and the chain is used to verify your ownership claim, then how exactly does the chain have no effect?

This is all contingent on everyone caring about what the chain says. If I get hold of the movie and the key, the chain may say you own the movie, but I don't care. Now what?

> the purpose that it serves is to track the ownership of an asset. sure, you can make a copy of it. but you don't own that copy.

Movies are sold by the millions. Is the idea here making a million different watermarked copies of any given movie? If so, the watermark is the important bit, so what do you want the blockchain for? Just point to the court that John Smith has a movie that was tagged as having been sold to Joe Bloggs, and thus isn't his.

> if i discover that you made a copy, i can charge you with theft (see piracy) because i'll use the chain as proof of ownership and i can prove that you don't own your copy.

1. I bet it's going to be fun to explain all the details of the blockchain to the court and to convince them that this is indeed a tight proof of ownership.

2. If the blockchain is the ultimate arbiter of who owns what, then as soon as I manage to hack you, I can steal all your stuff, become its rightful owner in the view of both the blockchain and the law, and then sue you.


i don't want the blockchain for anything. i want to understand what it is capable of. i don't need arguments that explain that i can do the same without the blockchain. i already know that. i am trying to learn what the blockchain can do, regardless if it is useful or not. your answers are not helping.

> Any encrypted material in the blockchain is public. Any algorithm in it is also public. So I can just execute the code by hand, skipping any tracking code.

i do not believe this is true. otherwise smart contracts would not work. if i store something on the chain, and accessing it triggers a smart contract, then you should not be able to bypass the contract. i don't know how that works, but if it didn't work then smart contracts would not be enforceable. if that is the case i'd really like to see evidence of that.

> Is the idea here making a million different watermarked copies of any given movie?

a watermark only tracks ownership, but it doesn't call home to count how many times the movie has been watched. it's not something i want. but it is something the blockchain would enable. and again, i am not interested in learning how to solve that problem without the blockchain, but i want to learn how the blockchain would solve this problem, regardless of better alternatives.

> If the blockchain is the ultimate arbiter of who owns what, then as soon as I manage to hack you

you would not just have to hack me, but you would have to initiate an ownership transfer on the chain. and these things already happen, millions of coins have been stolen by some mechanism that allowed the transfer of ownership on the chain. and the transfer could not be undone, at least not without resetting the chain. so clearly this is a weakness of the whole blockchain concept. and something the developers will need to address. the question here is, can it be addressed or is the whole concept so flawed that this can't be fixed?


> i do not believe this is true. otherwise smart contracts would not work.

Smart contracts work because they keep everything on the chain. You're proposing a weird hybrid model, like "let's tie the legitimate possession of a real-life movie to the state of an item in World of Warcraft".

Things work if you're within WoW fully, or dealing with a physical DVD fully. Trying to combine both into a single system is where things get weird.

> if i store something on the chain, and accessing it triggers a smart contract, then you should not be able to bypass the contract.

This works so long as everything you care about is on the chain. But your movie isn't.

> i don't know how that works, but if it didn't work then smart contracts would not be enforceable. if that is the case i'd really like to see evidence of that.

Try to think of a mechanism that would force me to have a WoW account, and to register my interactions within WoW, every time I wanted to watch, say, Guardians of the Galaxy.

> a watermark only tracks ownership, but it doesn't call home to count how many times the movie has been watched.

I think you're not getting that the blockchain and the movie exist separately. For the blockchain to refer to my particular copy of the movie, my copy has to be unique in some way. It has to have a unique SHA256. I'm saying that if my copy is already unique, it's already identifiable as mine, so the blockchain doesn't really add much to solving anything here.

Your legal issues are solvable by just "Look, here's a record that I sold this movie to Bob Smith and his copy has SHA256 ABCDEF0123...., and look, here it is on Pirate Bay"
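The identifying step is just an ordinary hash, which you can produce and later check without any chain at all -- a sketch with a made-up file name:

  # Fingerprint the copy sold to a specific buyer; the hash, not a blockchain,
  # is what ties a leaked file back to that sale record.
  import hashlib

  def fingerprint(path: str) -> str:
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  # Store "sold copy <fingerprint> to Bob Smith" in any ordinary database.
  print(fingerprint("movie_sold_to_bob_smith.mp4"))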

> the question here is, can it be addressed or is the whole concept so flawed that this can't be fixed?

IMO, it's unfixable. The whole point of blockchains is the lack of a central authority. Your identity on one starts and ends in the possession of a public/private key pair. The second somebody gets their hands on that, as far as the chain is concerned, they're you. At that point they transfer your stuff to their account and you're screwed. There's nobody to appeal to, because the very point of the system is that nobody can override that mechanism.


if the blockchain can protect access to anything on the chain, shouldn't it be enough to store the key to encrypted data on the chain, without the data itself?

sure, once you decrypt it, you can potentially copy it, but that's not a failure of the blockchain, because the same is true without it. the blockchain has other benefits here.

your movie becomes unique by encrypting it with your public key. you can probably keep a watermark in the movie too, but that only makes it harder to share the decrypted movie, not impossible.


> if the blockchain can protect access to anything on the chain, shouldn't it be enough to store the key to encrypted data on the chain, without the data itself?

No, I mean, think of an open source game. You have both the code and the data. Say it's a fighting game.

The game only has effective power if your interest lies entirely within the game. If what you want is to beat a friend in a match then you must act within the game's rules -- use the controller, use the provided moves, and win by applying your skill.

But if all you want to do is to watch the ending cinematic, then you can escape the game's rules. You can just read the source, find the right file, and decode the ending cinematic. You can skip the requirement to finish the game or to play in hard mode. You have the code and the data, so you can break the rules.

Blockchain stuff is like that. So long as what you want is within the blockchain and nowhere else, the blockchain has power.

You can store encrypted data on the chain. But since the data is public and the code is public, you can always bypass the chain. You can just take the secret, feed it yourself into OpenSSL, apply the decryption key, and bypass whatever stats accounting/etc might be part of the smart contract.

> sure, once you decrypt it, you can potentially copy it, but that's not a failure of the blockchain, because the same is true without it. the blockchain has other benefits here.

It doesn't have any; it only has weaknesses. A standard webserver would be stronger, because a webserver can work with secrets you can't access. A blockchain is by definition open code, and works with open data. It can't keep any secrets from you, or reach into any private storage.



Okay? I don't think you're actually reading those. See the conclusion:

"Private data in a smart contract is not private as such since we are dealing with public blockchains"

Yeah, "private" exists as a language construct, in the way it does in C++. No, it's not actually private from the world though, and so anything you put there is something I can get my hands on trivially.



