Hacker News | pwg's comments

What happens with that "absolute position relative to some arbitrary 0,0 picked by the mouse" when the user picks the mouse up off the table/pad/etc. and repositions it (i.e., they hit the edge of the pad and now "re-center" to continue moving left (or right) on screen). The mouse loses its 0,0 point reference as soon as it is picked up.

It could send a "reset 0,0" packet of some form in this case, but now reception of that packet becomes critical to continuing to properly communicate motion to the attached computer.


That's not how mouse input works though, right? If I move my mouse cursor to 10,10, and then pick up the mouse and set it down somewhere else, it's still at coordinates 10,10. You don't need the mouse's physical absolute position, but just the cursor position (which is the sum of all the relative movements)
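
Roughly, as a sketch in Python (the names here are just illustrative, not any real mouse protocol):

    # The host keeps the cursor; the mouse only ever reports relative (dx, dy) deltas.
    def apply_report(cursor, delta):
        # The new cursor position is just the running sum of all deltas so far.
        return (cursor[0] + delta[0], cursor[1] + delta[1])

    cursor = (10, 10)
    cursor = apply_report(cursor, (5, -3))   # mouse moved; cursor is now (15, 7)
    # Lifting the mouse and setting it back down produces no reports at all,
    # so the cursor simply stays put - there is no origin to lose or reset.

A dropped report just means a little lost motion, never a broken reference point.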

> It could send a "reset 0,0" packet of some form in this case, but now reception of that packet becomes critical to continuing to properly communicate motion to the attached computer.

And those "how I would have designed a wireless mouse protocol" guys are back at square one.


Ken, there is a misspelling in footnote 8.

This: "Two support"

Was likely meant to be: "To support".


Thanks! I lose proofreading enthusiasm by the time I reach the footnotes :-)

Most likely what happened is some MBA ran a short A/B test of smaller vs. bigger video thumbnails, and the A/B results showed more "engagement" with the larger size thumbs, and so, of course, to meet his/her performance goals, the MBA had the page altered to the version that showed "more engagement".

I think it also helps them figure out which videos keep people on YouTube longer. If I scroll to a section of the page that has 6 videos, and I stare at them for 10 seconds, then scroll down, they'll know that one or two of those videos must have been somewhat interesting. But if I stare at 6 videos, then scroll away 2 seconds later, they'll know that nothing in that batch was worthwhile.

The fewer videos they have in focus at a time, the more accurate their algorithms can be.


> “Chrome today represents 17 years of collaboration between the Chrome people” ... “Trying to disentangle that is unprecedented.”

Hmm, sounds very suspiciously similar to MS's argument from twenty some years ago for why they couldn't "disentangle" Internet Explorer from Windows the OS during their monopoly trial.


From the addition:

> (EXIF stripped via screenshotting)

Just a note, it is not necessary to "screenshot" to remove EXIF data. There are numerous tools that allow editing/removal of EXIF data (e.g., exiv2: https://exiv2.org/, exiftool: https://exiftool.org/, or even jpegtran with the "-copy none" option https://linux.die.net/man/1/jpegtran).

Using a screenshot to strip EXIF produces a reduced quality image (scaled to screen size, re-encoded from that reduced screen size). Just directly removing the EXIF data does not change the original camera captured pixels.
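
For instance, a minimal sketch in Python, assuming the third-party piexif library (the filename is just a placeholder):

    import piexif

    # Drop the EXIF (APP1) segment from the JPEG in place. The compressed pixel
    # data is not touched, so unlike a screenshot there is no rescaling or
    # re-encoding quality loss.
    piexif.remove("photo.jpg")

    # Or keep the original and write a stripped copy instead:
    piexif.remove("photo.jpg", "photo_stripped.jpg")

The command-line tools above do the same job without any scripting.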


I would like to point out that there is an interesting reason why people will go for the screenshot: they know it works. They do not have to worry about residual metadata still somehow being attached to a file. If you do not have a solid technical understanding of file metadata, you cannot be certain that whatever tool you used actually worked.

True, but on Mac, a phone, and Windows I can take a screenshot and paste it into my destination app in a couple seconds with a few keystrokes. That's why screenshotting is the go-to when you don't mind cropping the target a little.

A little bit less convenient to use on a phone though - and I like that screenshotting is a more obvious trick for people who don't have a deeper understanding of how EXIF metadata is stored in photo files.

With ___location services on, I would think that a screenshot on a phone would record the ___location of the phone during a screenshot.

It would be best to use a tool to strip exif.

I could also see a screenshot tool on an OS adding extra EXIF data, both carried over from the original and newly added - like the URL, the OS, and the logged-in user. Just like print-to-PDF does when you print: the author field contains the logged-in user, amongst other things.

It is fine for a test, but if someone is using it for opsec, it is lemon juice.


I built a tool for testing that a while ago - try opening a screenshot from an iPhone in it, you won't see any EXIF ___location data: https://tools.simonwillison.net/exif

Here's the output for the Buenos Aires screenshot image from my post: https://gist.github.com/simonw/1055f2198edd87de1b023bb09691e...


That is cool, but we can't be guaranteed that will always be the case, nor could we make a statement about all phones; it would be on a phone-by-phone basis. Especially on Android, where someone could have an alternative screenshot application.

Depending on your threat model, I'd argue that it would be impossible to prove that metadata is not included within the image itself (alpha channel, noise, pushed pixels, colorspace skew, etc).

I'd be interested in stego techniques that can survive image reduction and denoising.


Take a photo of the image displayed on your laptop screen with your phone. Ultimate EXIF removal!

Screen dust and smudges now form a fingerprint to cross correlate images.

Ffshare on Android is a one-second step to remove EXIF data.

Think: "stalker".

If the person is already a stalker you'd think they'd already know this, no? There's that anecdotal stuff in Japan where a vlogger was located by her "fans" from a reflection of her home bus station or something. The weird people will do weird stuff regardless of technology, IMO.

And governments have already been doing this for decades at least, so ... I think the tech could be a net benefit, as with many other technologies that have matured.


> weird people will do weird stuff regardless of technology

If I were someone's only stalker, I'd be absolutely hopeless at finding their ___location from images. I'm really bad at it if I don't know the ___location first hand

But now, suddenly with AI I'm close to an expert. The accessibility of just uploading an image to ChatGPT means everyone has an easy way of abusing it, not just a small percentage of the population


So I guess the evil we're worried about is stalkers who are bad at guessing locations, bad enough with tech that they don't know about geoguessr websites and subreddits, but good enough with tech to use LLMs?

Screenshot / save photo -> add to chatgpt chat -> "where is this taken?"

There couldn't possibly be a lower barrier to doing that

ChatGPT is also currently the #1 free app on iOS and Android in the US. Hardly a niche tool only tech people know about, compared to the 129k people on that subreddit.


Given that ChatGPT supposedly has "500 million weekly actives" (recent Sam Altman quote) I think what you're describing there is a pretty likely persona.

I suspect that those folks who answer survey questions of "would you pay more for made in the USA" with "yes" are thinking (if they are thinking at all) of paying $2 to $3 more on a $100 item, not paying $110 more on a $100 item.

None of the surveys are ever crafted to ask: "How much more would you pay for a $100 item for 'made in the USA'?".


It is largely pointless, in general, to survey people about how much they would pay for things. Taking such answers seriously has led a lot of companies to ruin. The whole point of pricing is that no one knows how much something is worth until it is actually selling (or not).

Yeah isn’t this like the number one lesson for startups? People will say lots of things when there’s no money or reputations on the line.

Quality is also an undefined variable, because people may pay 10% more for an American made product that is of comparable quality, but they may also be willing to pay 110% more if the Asian counterpart is poor quality.

When you’re using the same exact photos, there’s no discernible quality difference.


Ironically, perhaps, but in 2025 I'd argue the Asian counterpart would probably be of higher quality, at least in the initial transition back to US manufacturing. AND it would be cheaper.

The survey already used percentages. As for not thinking - it would seem to me worrying about the effects of one's purchases on the local economy, and the knock-on effects this has on sovereignty and politics, takes more thought than just short-sightedly picking the cheaper option no matter what.

Most people don't understand percentages.

Maybe: Most American people don't understand percentages?

Most people, no matter the place.

A lot of people are surprised when they discover that 100 + 50% - 50% is not 100.

I do not like percentages myself; I would prefer we say 0.8 of something, because the operation becomes a simple multiplication and it is easier not to make mistakes.
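
A quick illustration in Python of why the factors are easier to reason about:

    price = 100
    price = price * 1.5   # "+50%" -> 150.0
    price = price * 0.5   # "-50%" -> 75.0, not back to 100
    print(price)          # 75.0

    # Written as factors it is obvious: 1.5 * 0.5 = 0.75, not 1.0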


Most people. They teach doctors to use frequency representations (e.g., 12 of 100) for a reason.

> Nearly half (48%) say they’d be willing to pay around 10–20% more.

$110-120 for a $100 item, no?


I believe they meant an additional $110, which would be a 110% markup.

Why do you believe this?

https://news.ycombinator.com/item?id=43787992

> of paying $2 to $3 more on a $100 item, not paying $110 more on a $100 item

$110 more on a $100 item would be $210. I have no idea where pwg got the “$110 more”, though. Seems the in-context comparison would be “$85 more”.
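
For the two main readings, as plain arithmetic (Python only for concreteness):

    base = 100
    # "willing to pay 10-20% more" (the survey's phrasing)
    print(base * 1.10, base * 1.20)   # 110.0 120.0 -> a $110-$120 total price
    # "paying $110 more on a $100 item" (pwg's wording)
    print(base + 110)                 # 210 -> i.e. a 110% markup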


Probably because that’s approximately in line with the article.

> much less because the OS in ROM would write itself into RAM on startup and take about 20KB away.

RAM shadowing of the ROM did not exist in the Ataris (at least not in the original 400/800 models). The ROMs simply were physically connected to actually "be" the top 16KB of the 6502's 64KB max address space. The CPU executed the ROM code by directly reading the code from the ROM chips.

Which is also the reason the original 400/800 models were limited to 48KB max RAM: 16KB of the address space was already used by the ROMs.


The original Atari 400/800 included BASIC on a ROM cartridge.

To use BASIC, you plugged the BASIC cartridge into the system and powered up.

To boot something else (games, etc., from either cassette or disk) you first removed the cartridge, then powered up.

With the XE series, BASIC was built into the console, so the "magic keys" were needed to tell the hardware to virtually "unplug" the BASIC ROM before it tried booting from any connected devices.


On the Ataris you could also run 6502 binaries from inside Atari BASIC. The Atari ROM OS explicitly reserved page 6 of the memory map for "user use" and Atari BASIC followed suit. There were (IIRC) also a tiny number of page 0 bytes reserved for "user use".

So, as long as your entire binary fit into 256 bytes, you could run it from inside BASIC. In fact, you could even store it as a BASIC program: the BASIC program just needed to "POKE" the binary into page 6 (addresses $0600-$06FF, i.e. 1536-1791 decimal) and then jump to it, typically via USR(1536).

Doing anything larger than 256 bytes required you to dig into the inner workings of where BASIC itself stored code, and to avoid overwriting any of BASIC's data or having it overwrite any of your code. Not impossible to do, but it did require a lot of undocumented (or not so well documented) work.


You might have been able to store the opcodes in strings, letting BASIC put them in memory somewhere and then getting the address.

A cool trick to move your player/missile graphics vertically in BASIC was to store the sprites in strings, point the sprite's starting memory to the address of the string, and then use string-copying routines in BASIC to move the sprite up & down (since they only had a horizontal-position register; vertically they were as tall as the screen, so you had to blit them to get vertical movement).


Yes, I learned that trick out of the blue book, De Re Atari I believe. Hard to remember.

I also used that trick to scroll one of the lower-resolution graphics screens for a brick-out-type game that would inch whatever was left toward the player.

