
Did you actually discredit someone, or have you not properly considered your units in this response?

Commuting through Grand Central station every day certainly adds up to more than a few hours.

And people don't tend to get CT scans very frequently, so the timeline here is massive.


In your opinion, how many hours spent in Grand Central station equal the radiation received from a CT scan?


Somewhere between 7 and 700 days.

CT Scan: 10-1000 mrem

Grand Central Station: 525 mrem / yr

https://files.eric.ed.gov/fulltext/ED297952.pdf
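
Working that out from the numbers above:

    low end:  10 mrem   / (525 mrem/yr) ~ 0.02 yr ~ 7 days
    high end: 1000 mrem / (525 mrem/yr) ~ 1.9 yr  ~ 700 days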


So OP's statement is true for people who live IN the station.


It's roughly 40 min per workday over a typical year. That's a bit high but not unreasonably so.


That would amount to 10 mrem of radiation per year. I don't believe this is a realistic estimate for a CT scan, though. From epa.gov [1]:

- Head CT: 2.0 mSv (200 mrem)

- Chest CT: 8.0 mSv (800 mrem)

- Abdomen CT: 10 mSv (1,000 mrem)

- Pelvis CT: 10 mSv (1,000 mrem)

So for a head CT, one would need to spend more than 13 hours per workday in the station. OP was off by at least an order of magnitude.

https://www.epa.gov/radiation/frequent-questions-radiation-m...
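
For reference, the arithmetic behind those figures (assuming ~250 workdays per year):

    525 mrem/yr / 8,760 h/yr ~ 0.06 mrem/h in the station
    40 min/workday x 250 workdays ~ 167 h/yr -> 167 h x 0.06 mrem/h ~ 10 mrem/yr
    head CT: 200 mrem / 0.06 mrem/h ~ 3,300 h, or ~13 h per workday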


This data is from 2006. In the nearly 20 years since, there has been substantial progress in CT radiation reduction through model-based iterative reconstruction and now ML-assisted reconstruction, on top of steady advances in detector sensitivity and, most recently, photon-counting CT.

In clinical practice, those doses are about 2-3x what I see on the machine dose reports every day at my place of work.

In thin patients who can hold still, I've done full-cycle cardiac CT and achieved a dose of < 1 mSv. We are always trying to get the dose down while keeping the scan diagnostic.

Source: Practicing radiologist.


Fair enough. That was the first number I pulled from Google, but I trust your source a good deal more.


I used the word "comparable". Given they are in the same ballpark on a log scale, I'd say my opinion stands vindicated.

Also, there's an Apple Store there. RIP all the Geniuses there, I suppose.


Let's be real: in most situations it doesn't matter. One "statement" per line is bog standard. Whether that statement is surrounded by parens or ends in a semicolon isn't impactful for reading.

LISP is only better when you add source-code transformation (which is way easier when the source code looks like the transformed code). But then you introduce "everyone can write their own syntax", which is good and bad given the history of DSLs...


> One "statement" per line is bog standard

This isn't really true. Most non-Lisp languages I work in, like JS or Ruby or Python, have things like

    long_expression =
      long_other_expression

or

    long_expression
      .another_long_expression
      .another_long_expression


And if most of your code looks like that, you are making a mistake.

"Sometimes I need multiple lines" is fine, exceptions happen.

But again I ask, visually are those lines super different?

Ditto for things like for loops, which have multiple statements on a line.


When writing code you are transforming it all the time. Having (mostly) only expressions comes in very handy. It is called structured editing of code.


If 23andme has an agreement with its consumers on how it will handle their data, it should not matter whether the company is bought: that agreement should be maintained in perpetuity unless those consumers actively choose to change it.

After all, we wouldn't talk about Dropbox being sold resulting in the ransacking of your personal data, so why is that part of the conversation with 23andme?

(I am not being critical of the AG here, but instead pointing out how lax consumer protections have gotten that this even needs to be a talking point.)


You're right that it should not matter. That would be a great world to live in! It's not this one, though. Companies ignore these agreements all the time. Sometimes they're even caught and their wrists get slapped.

More often (I believe) we just never learn the agreements have been broken in the first place.

But it is a rule—almost approaching a law of nature—that companies facing financial distress will begin putting a price tag on private data they've promised never to sell. It's like the cartoon with the starving people in the life raft: they look at your data, and suddenly they don't see a legal agreement to protect it, they see a juicy drumstick.

> After all, we wouldn't talk about Dropbox being sold resulting in the ransacking of your personal data, so why is that part of the conversation with 23andme?

Well, opinions differ on that one too!


> After all, we wouldn't talk about Dropbox being sold resulting in the ransacking of your personal data, so why is that part of the conversation with 23andme?

Both 23andme's and Dropbox's privacy policies only require them to notify users if the privacy policy changes (with no restriction on the scope of those changes), so maybe we should (if Dropbox were to be sold)?


Not legally: they can only do that if you implicitly agree by continuing to use the product.

If you don't interact in a meaningful way, you cannot change a contract from one side; you need a new agreement.

Now whether this is enforced is a different matter.


CI that isn't running on your servers wants a very deep understanding of how your process works so the provider can minimize its costs (this is true whether or not you pay for the CI).


Totally! It's a legitimate thing! I just wish I had more tools for dynamically providing this information to CI so that it could work better, while also being able to write relatively general tooling in a general-purpose language.

The ideal for me is (this is very silly and glib and a total category error) LSP but for CI. Tooling that is relatively normalized, letting me (for example) have a pytest plugin that "does sharding" cleanly across multiple CI operators.

There are some tools and conventions already, of course, but caching and spinning up jobs dynamically in particular are still not there.


Rust leaves it unrestricted, as does Haskell (although I am lying a little: most just choose to use Either as their Result).

Rust actually gets some use out of it, for instance returning an alternative value instead of handling it as a pure failure.


Oh true, I am really interested in those choices too.


The case that comes most to mind for that in Rust is https://doc.rust-lang.org/std/primitive.slice.html#method.bi...
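
For anyone unfamiliar, a minimal sketch of what that looks like (binary_search returns Result<usize, usize>, where the Err value is the index at which the element could be inserted to keep the slice sorted):

    fn main() {
        let v = [1, 3, 5, 7];
        match v.binary_search(&4) {
            // Found: Ok carries the element's index.
            Ok(i) => println!("found at index {i}"),
            // Not found: Err still carries a useful value (the
            // insertion point) rather than acting as a pure failure.
            Err(i) => println!("not found, insert at index {i}"), // i == 2 here
        }
    }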


Thank you!


We log TBs per hour and grep is enough for me to find interesting data quite effectively.

The problem with weird log formats is recreating all the neat stuff you can do with tooling, not necessarily just being able to open a file in a text editor.


While that is true of some YouTube takedowns, I believe this article is referring to a DMCA takedown request, which, while using a YouTube-specific form, is based on copyright law.


Because it is considered one of the biggest problems on the platform.

Creators are frustrated that, unless they are big enough to get YouTube to notice them, they at minimum have to dox themselves to remove a bad DMCA claim.

Honestly, in a world with Content ID, they don't necessarily need the current system to remain as-is to keep the big publishers happy; the publishers already get preemptive blocking of content without lifting a finger.


> Creators are frustrated that, unless they are big enough to get YouTube to notice them, they at minimum have to dox themselves to remove a bad DMCA claim.

Unfortunately, creators have very little leverage over YouTube and no realistic ability to move to a different platform. They can be frustrated all they want, but until YouTube has a reason to fear creators leaving the platform en masse, there's little pressure on them to change.


One might argue that having tons of popular entertainers who are frustrated with the platform, and there being no exclusivity agreements with those entertainers (AFAIK, and contrary to the way Twitch treats its top streamers) would create an opportunity for competitors to pop up.

That said, I don't think there is realistically anything YouTube can do to materially improve things under current laws. They can provide a real person to talk to about why the bogus claim is being taken seriously, but they still have to take it seriously.

If we want Google to fix this for us, they will have to do so via lobbying to change the laws, or at least some kind of creative but successful lawsuit that dramatically changes how those laws are interpreted. The product people are not capable of fixing this; it's in the lawyers' and the courts' court (so to speak).


YouTube is not just doing the bare legal minimum, IIRC.

For instance, the three-strike system that results in your entire channel being demonetized is not required by law.

Additionally, they could work with creators to get them in touch with people who can help, rather than relying on social media to forward them the worst instances.


> One might argue that having tons of popular entertainers who are frustrated with the platform, and there being no exclusivity agreements with those entertainers (AFAIK, and contrary to the way Twitch treats its top streamers) would create an opportunity for competitors to pop up.

I wish, but it's not that easy. A potential competitor needs to not only start off with the ability to handle all the video uploading and delivery, but also provide the audience and monetization. If YouTube were strictly a content-hosting service there wouldn't be much of an obstacle in this regard, but it's also the discovery platform that audience members go to in search of content. The network effect is too strong because hosting and discovery are bundled into one.


Sure, I understand that it's unlikely, but is Google willing to bet its multibillion-dollar business on that? They would have to be pretty damn sure. $low_probability * $enormous_potential_losses = $still_pretty_big_expected_losses


Is this affecting YouTube’s bottom line?

I’m sure that creators would like to have a person to talk to, but it doesn't seem that Alphabet needs to provide one. Would they make more money if they did?


Demonetized means YouTube also doesn't make money.

They literally stop showing ads in front of it.


A 20 TB HDD is <$400

An 8 TB SSD is >$600

That's roughly $75/TB vs $20/TB, nearly a fourfold increase.

Also, a 16 TB SSD is $2,000 ($125/TB), so it's more like a 6x increase in a data center setup.
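
Spelled out, using the prices above:

    20 TB HDD at $400   -> $20/TB
    8 TB SSD at $600    -> $75/TB  (~4x)
    16 TB SSD at $2,000 -> $125/TB (~6x)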


The 4TB M.2 SSDs are getting to a price point where one might consider them. The problem is that it's not trivial to connect a whole bunch of them in a homebrew NAS without spending tons of money.

The best I've found so far is cards like this [1] that allow for 8 U.2 drives, plus some M.2-to-U.2 adapters like this [2] or this [3].

In a 2x RAID-Z1 or a single RAID-Z2 setup, that would give 24TB of redundant flash storage for a tad more than a single 16TB enterprise SSD (capacity math after the links below).

[1]: https://www.aliexpress.com/item/1005005671021299.html

[2]: https://www.aliexpress.com/item/1005005870506081.html

[3]: https://www.aliexpress.com/item/1005006922860386.html
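
The capacity math, assuming 8 of the 4TB drives on that card:

    8 x 4TB = 32TB raw
    single RAID-Z2: 2 drives of parity -> 6 x 4TB = 24TB usable
    2x RAID-Z1 (4 drives each): 1 parity drive per vdev -> 2 x 3 x 4TB = 24TB usable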


On AM5 you can do 6 M.2 drives without much difficulty, and with considerably better perf. Your motherboard will need to support x4/x4/x4/x4 bifurcation on the x16 slot, but you put 4 drives there [0], and then use the two onboard x4 M.2 slots; one uses the CPU lanes and the other is connected via the chipset.

[0] - https://www.aliexpress.com/item/1005002991210833.html


You can do without bifurcation if you use a PCIe switch such as [1]. This is more expensive, but it can also achieve more speed and will work in machines that don't support bifurcation. The downside is that it draws more power.

[1] https://www.aliexpress.com/item/1005001889076788.html


The controller I linked to in my initial post does indeed contain a PCIe switch, which is how it can connect 8 PCIe devices to a single x16 slot.


Right, and whilst 3.0 switches are semi-affordable, 4.0 or 5.0 ones cost significantly more, though how much that matters obviously depends on your workload.


True. I think a switch which could do, for example, PCIe 5.0 on the host side and 3.0 on the device side would be sufficient for many cases, as one lane of 5.0 can serve all four lanes of a 3.0 NVMe. But I realize we probably won't see that.
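
The bandwidth math, roughly:

    PCIe 3.0: ~1 GB/s per lane -> x4 link ~ 4 GB/s
    PCIe 5.0: ~4 GB/s per lane -> a single lane ~ 4 GB/s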

Perhaps it will be realized with higher PCIe versions, given how tight signalling margins will get. But the big guys have money to throw at this, so yeah...


I want to see any indication that abundance from AI would benefit mankind first.

While I would love Star Trek, society has been heading very much toward the cyberpunk aesthetic, aka "the rich hold all the power".

To be precise, AI models fundamentally need content to survive, but they need so much content that there is no price that makes sense.

Allowing AI to monetize without enriching the people who allowed it to exist isn't a good path forward.

And to be clear, I do not believe there is a fundamental rift here. Shorten copyright to something reasonable like 20 years, and in a decade AI will have access to all of the data it needs, guilt-free.


There are glimpses. Getting a high score on an Olympiad means there is the possibility of being able to autonomously solve very difficult problems in the future.

