
How do you feel about the elevator operation industry?

Comparing an elevator to a semi-truck and trailer as if they were the same is disingenuous at best and pointless at worst. As of now, I'm hard pressed to find a point to your question.

OK I'll explain it to you. Automation changes the labor landscape as it has done since the industrial revolution. This is nothing new.

Okay, but automating an elevator is something someone from a Python bootcamp could do. Automating a semi-truck is not. So again, comparing the two is ludicrous. If you honestly feel they are the same, you are someone I would semi-politely make my excuses to leave the conversation with at a party: hey, look, there's someone over there I haven't seen and would really like to talk to rather than you. Enjoy the party.

It is directionally similar. We've progressed, so we're automating more.

Enjoy the party too. I'll go have a drink with recursive.


I'm not.

Replacement for what use case? The whole point is to eliminate the behavior, not provide another feature that has the same problems. What does failure mean? It's a problem for ad networks, not for regular humans.

The use case of not having to log in to system A, which is being embedded within system B, because you already logged in to system A? Without needing to introduce a third-party SSO C? That's pretty "regular human", even if it's "medium sized corporation" instead of "Joe Regular" (but even Joe likes it if he doesn't have to log into the comment box on every site that uses THE_COMMENT_SYSTEM_HE_LIKES).

This exists already. You can set cookies at a higher level of the same ___domain, so foo.example.com and bar.example.com can share cookies at example.com. You can also use CORS to interact with a truly third-party site. None of these require third-party cookies.
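For illustration, a minimal Flask sketch of the parent-___domain approach, assuming the app is served from foo.example.com; the route and cookie names are hypothetical:

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login")
    def login():
        resp = make_response("logged in")
        # Domain=example.com makes the cookie visible to foo.example.com,
        # bar.example.com, and every other subdomain of example.com.
        resp.set_cookie("session", "abc123", ___domain="example.com",
                        secure=True, httponly=True, samesite="Lax")
        return resp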

A use case this doesn't address is embedding across two completely different domains, which is pretty common in the education space, with LMS platforms like Canvas (https://www.instructure.com/canvas) embedding other tools for things like quizzes, textbooks, or grading. I ended up in a Chrome trial that disabled third-party cookies, which broke a lot of these embeds because they could no longer set the identity cookies they rely on from within their iframes.

As nwalters also points out, this isn't the same at all. System A and System A' both from Source Α are not the same as System A (Source Α) and System B (Source Β).

Which you know, because you say "you can also use CORS to interact with a truly third party site". But now, I invite you to go the rest of the way: what if the third party site isn't Project Gutenberg but `goodreads.com/my-reading-lists`? That is, what if the information that you want to pull into System A from System B should only be available to you and not to anyone on the net?


Use OAuth2 to get system B's access token, then use authenticated server-to-server API requests to pull needed information from system B.
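As a hedged sketch of that server-to-server flow, assuming system B exposes a standard OAuth2 client-credentials token endpoint; all URLs, paths, and credentials below are hypothetical:

    import requests

    # System A's back end exchanges its credentials for a token;
    # the client secret stays server-side and never reaches the browser.
    token_resp = requests.post(
        "https://system-b.example.com/oauth/token",
        data={"grant_type": "client_credentials",
              "client_id": "system-a",
              "client_secret": "..."},
    )
    access_token = token_resp.json()["access_token"]

    # It then pulls the protected data and relays it to its own front end.
    data = requests.get(
        "https://system-b.example.com/api/reading-lists",
        headers={"Authorization": f"Bearer {access_token}"},
    ).json()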

This multiplies the cost of the integration by at least an order of magnitude.

BINGO! The issue here, of course, is that instead of _two_ components (Front End A and Embed B) you now have four (the back ends must communicate, and if A didn't need a back end ... well, now it does).

Now, if you meant "Use OAuth2 in the browser", that's just the original case (you can't authorize if you can't authenticate, and it's the ambient authentication that's being stripped when you eliminate third-party cookies).


You might not want to log in to both systems. For this specific case, a link to the service that permits adding comments, which you are already logged in to, can still be used without needing third-party cookies.

Furthermore, cookies are not a very good way of doing logins. There are other problems with them: they can be stolen if someone else takes over the service; it is difficult for users to know what each cookie is if they want to modify or delete specific cookies; and the server must set the expiry, which makes it difficult for end users to control. Methods that are controlled by the end user would be better.

Other methods of authentication can include:

- HTTP basic and digest authentication (which have some of the same problems).

- Two-factor authentication, which has many problems (especially if it is badly implemented).

- HMAC (see the first sketch after this list). For things that only require authentication for write access and that are idempotent, it should be safe whether or not the connection is encrypted. A proxy could spy on the operation, but could only use it to repeat the same operation you had just done, not to do any other operation. However, these characteristics are not valid for all systems; e.g., it does not prevent replay and does not prevent spying on what you are doing. For uploading files to a server that are never changed or deleted, if the files are public anyway, anyone already knows who uploaded them and when, and nothing else is done with this authentication, then HMAC will work.

- X.509 client certificates (see the second sketch after this list). This requires TLS, although many servers use it already anyway (though I think that for things which do not require authentication, TLS should be optional). This is secure: if a server obtains a copy of your certificate, it cannot use it to impersonate you. Furthermore, X.509 can be used to log in to multiple services without needing an authentication server, and a kind of 2FA is possible if the private key is passworded (the server you are logging in to will never see the private key or the password). Also, the certificate can be signed by a different issuer certificate that has a different key (the issuer is normally someone else, but could be controlled by yourself if you want); you could store the issuer certificate's private key on a separate computer not connected to the internet (possibly passworded as well), providing some additional protection if the subject certificate is compromised. There are many other benefits as well.
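First, a minimal sketch of the HMAC idea, assuming a shared secret established out of band; the canonical string and names here are hypothetical, not any specific protocol:

    import hashlib
    import hmac

    SECRET = b"shared-secret-established-out-of-band"

    def sign_request(method: str, path: str, body: bytes) -> str:
        # Sign the parts of the request that matter; without the secret,
        # the signature cannot be forged for any different operation.
        message = method.encode() + b"\n" + path.encode() + b"\n" + body
        return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

    # The server recomputes this and compares with hmac.compare_digest().
    # As noted above, this alone does not stop replaying the exact same
    # request, which is why the operation must be idempotent.
    signature = sign_request("PUT", "/files/report.txt", b"file contents")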
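Second, a hedged sketch of the client-certificate side using the `requests` library; the paths and URL are hypothetical, the server must be configured to request and verify client certificates, and note that `requests` wants an unencrypted key file:

    import requests

    # The private key never leaves this machine; the server only sees the
    # certificate and proof that we hold the corresponding key.
    resp = requests.get(
        "https://service.example.com/account",
        cert=("client.crt", "client.key"),
    )
    print(resp.status_code)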


The use case is web sites that want to earn income with as little user overhead as possible. Targeted ads have many downsides, but they do pay websites without requiring any money at all from the user, or even an account.

So the problem for regular humans is the disappearance of features that they've grown used to having without paying any money. Finding a better way for these sites to support themselves has proven remarkably difficult.


I feel like many people here wouldn't care if those websites simply stopped existing.

Certainly a lot of people would care if Facebook disappeared.

There are also a billion other ad-supported web sites, each of which makes ten people happy. Not a single one of them would be widely mourned, but 5 billion people would each be saddened by the loss of one of them.


Many people would, though.

For a long time I thought Pinterest was search spam that no human could possibly want to see, but then I met real people in the world who like it and intentionally visit the site. I bet there are people who like eHow and the rest, too.


The viability of their business model shouldn't be everyone's problem.

It is their problem when a feature that they like disappears.

They don't care about what happens to the business itself. But they do care about the things the business provides.

If they don't in fact care, then indeed, nothing is lost. But a lot of people will miss a lot of things. Whoever comes up with an alternative that suits the case will make a lot of people happy.


People made money on advertising before the existence of cookies and ubiquitous tracking. Nature will heal.

And people had websites before the existence of Internet advertising. Let's set our expectations higher for how much healing is needed.

The article explicitly calls out that there are valid use cases (although it doesn't enumerate them). Federated sign-on and embedded videos seem like obvious examples.

First party cookies can't build a profile on you across multiple origins.

They absolutely can. They have, at minimum, your account information and your IP address. Maybe you use a burner email address and/or phone number, and maybe a VPN, but chances are you're not cycling your VPN IP constantly, so there's going to be some overlap there. And if you do cycle your IP, 99%+ of users probably aren't clearing session cookies when doing so, which means you're now tracked across IP/VPN sessions. Same deal if you ever connect without a VPN: that IP is tracked too. There are tons of ways to fingerprint without third-party cookies; they just make it easier (and also easier to opt out of while they exist: just disable third-party cookies. If no one has third-party cookies, sites are going to start relying on more intrusive tracking methods).

You can also easily redirect from your site to some third-party tracking site that returns the user to your successful-login page, and fail the login if the user is blocking the tracking ___domain. The user then has to choose between enabling tracking (by not blocking the tracking ___domain) and not seeing your website at all. Yes, the site might lose viewers, but if those viewers weren't making the site any money, that might be a valid trade-off if there's no alternative.

Not saying I agree with any of this, btw; I hate ads and tracking with a passion - I run various DNS blocking solutions, have ad blockers everywhere possible, etc. Just stating what I believe these sorts of sites would and can do.


All they need to do is redirect you through a central hub after login.

On first visit:

* "Please wait while we verify that you're not a bot, for which we'll need to associate a unique identifier with your browsing session." (logged in or not)

* The validation needs to do a quick redirection to an external centralized service, because if they can already identify that you're not a bot, you save CPU cycles, and you care a lot about carbon footprint after all.

* Redirect back to the original website, passing the "proof of not-a-bot" somewhere in the URL. This is just a string.

* The website absolutely needs to load the external script `https://proof-validation.example.com/that-unique-string.js` for totally legit purposes obviously related to detecting bot behavior, "somehow".
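A toy Flask sketch of that flow, purely to make the mechanics concrete; every ___domain, route, and parameter here is hypothetical:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        proof = request.args.get("proof")
        if proof is None:
            # First visit: bounce through the central "verification" hub,
            # which assigns the unique identifier and redirects back here
            # with it in the URL.
            return redirect(
                "https://proof-validation.example.com/verify"
                "?return_to=https://original-site.example.com/")
        # Redirected back: load the external script keyed to the string,
        # for totally legit bot-detection purposes, "somehow".
        return ('<script src="https://proof-validation.example.com/'
                f'{proof}.js"></script>')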

Half-joking because I don't think this would fly. Or maybe it would, since it's currently trendy to have that PoW on first visit, and users are already used to multiple quick redirections[1] (I don't think they even pay attention to what happens in the URL bar).

But I'm sure we'd get some creative workarounds anyway.

[1]: Easy example: A post on Xitter (original ___domain) -> Shortened link (different ___domain) -> Final ___domain (another different ___domain). If the person who posted the original link also used a link shortener for tracking clicks, then that's one more redirection.


They do this all the time. There's a reason we have the term "AI slop". LLM output definitely has a reputation.

This was seemingly unintentional, but I bet there are "trap streets" already in the models that use this principle.

It's an easier problem than the duet. If the round-trip is 38ms, you can estimate that the one-way latency is 19ms. You tell the other client to play the audio now, and you schedule it locally for 19ms in the future.
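A minimal sketch of that scheduling idea, assuming the round trip has already been measured; send_play_command and play_audio are hypothetical placeholders, not a real API:

    import time

    def schedule_synchronized_playback(round_trip_ms, send_play_command, play_audio):
        one_way_ms = round_trip_ms / 2  # estimate one-way latency as half the RTT
        start_at = time.monotonic() + one_way_ms / 1000.0
        send_play_command()  # arrives ~one_way_ms from now; the peer plays on receipt
        time.sleep(max(0.0, start_at - time.monotonic()))
        play_audio()         # local playback starts as the command lands remotely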

That's assuming standard OS and hardware and drivers can manage latency with that degree of precision, which I have serious doubts about.

In a duet, your partner needs to hear you now and you need to hear them now. With pre-recorded audio, you can buffer into the future.


You’re right that it’s an easier problem, but it’s still trickier than it looks. Remember the point of this is to be listening together. To do that, you need to be able to communicate your reactions. And then you’re back to the 38ms (in practice it’s probably twice that). Either way, at 120bpm that’s over a bar!

If you _don't_ have real-time communication, then you don't really need to solve this problem. But the problem is fundamentally unsolvable, because the speed of light (in a vacuum) is the speed of causality and, as I say, puts a hard cap on simultaneity. This tends to be regarded as obvious at interstellar distances, but it affects us at transatlantic distances too.


You're basically right, but one 4-beat bar @120bpm is 2000ms: 120bpm means 500ms per beat, and 4 × 500ms = 2000ms, so 38ms is a small fraction of a bar.

Also latency demands on conversation are not nearly as tight as those on music performance. See ubiquitous video conferences.


My brain ain't working, and yeah, I don't tend to notice transatlantic delays on voice and video calls.

The end?

I've used it and haven't had much success.

Loopy Pro has a cool convention for this that I haven't seen elsewhere. Drag up or down to change the knob value. While doing that, drag left or right to zoom in. That makes the up/down movement more precise.
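A speculative sketch of those mechanics; this is my guess at how such a mapping could work, not Loopy Pro's actual implementation:

    def knob_value_delta(dy: float, dx: float, base_step: float = 0.005) -> float:
        # Vertical drag distance (dy) changes the value; horizontal drag
        # distance (dx) "zooms in" by shrinking the per-pixel step, so the
        # same vertical movement becomes more precise.
        precision = 1.0 + abs(dx) / 100.0
        return -dy * base_step / precision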

I will look into it! Is this for mobile or desktop? I would like to see how they introduce this interaction pattern and what feedback they provide as you interact with the knob.

It's an iOS app. IMO it's really good. I own exactly one Apple product, and it's an iPad that only runs Loopy Pro.

Here's a section from the manual that loosely explains the concept[1]:

> Adjust a slider or dial’s value by dragging up and down, or left and right for horizontal sliders. For finer control, move your finger away from the dial.

[1]: https://loopypro.com/manual/#sliders-and-dials


Thanks! This link is really great!

My only Apple product is also an iPad, and I mostly use it to make music with Auxy Studio. :)

Do you use any fun apps on Android? Currently, my favorite apps are Digitron and Nanoloop. (No affiliation, but Digitron's upgrade was gifted to me.)


The only other music or audio app I use with any regularity is Reaper on Windows. I tend to do more performance-oriented stuff, and I try to keep everything outside the computer as much as practical. I don't use any software synths. I like the constraints and UX of dialing patches into my one keyboard/drum machine. I record some, but mainly I like to play in real time and not fiddle with VSTs and plugins.

Makes sense. Hardware is not something within my reach, currently, so I stick with software. I'm sure hardware can be a lot more fun.
