This data is from 2006. Over the past ~20 years there has been substantial progress in CT radiation reduction through model-based iterative reconstruction and, more recently, ML-assisted reconstruction, alongside iterative advances in detector sensitivity and now photon-counting CT.
In clinical practice, those doses are about 2-3x what I see on the machine dose reports every day at my place of work.
In thin patients who can hold still, I've done full-cycle cardiac CT and achieved a < 1 mSv dose. We are always trying to get the dose down while still being diagnostic.
Let's be real: in most situations it doesn't matter. One "statement" per line is bog standard. Whether that statement is surrounded by parens or ended with a semicolon isn't impactful for reading.
LISP is only better when you add source code transformation (which is way easier when the source code looks like the transformed code). But then you introduce "everyone can write their own syntax", which is both good and bad given the history of DSLs...
This isn't really true. Most non-Lisp languages I work in, like JS or Ruby or Python, have things like long_expression =\n long_other_expression, or long_expression\n.another_long_expression\n.another_long_expression.
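Rendered out, those shapes look something like this (Python-flavoured, with hypothetical names purely for illustration):

    # Hypothetical example: the same multi-line shapes in Python.
    long_other_expression = "some value computed elsewhere"
    long_expression = \
        long_other_expression

    # Chained calls broken across lines, one step per line:
    result = (
        "a,b,c"
        .replace(",", ";")
        .upper()
    )
    print(result)  # A;B;C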
When writing code you are transforming it all the time. Having (mostly) only expressions comes in very handy there. It is called structured editing of code.
If 23andme has an agreement with its consumers on how it will handle their data, it should not matter whether they are bought; that agreement should be maintained in perpetuity unless those consumers actively choose to change it.
After all, we wouldn't talk about Dropbox being sold resulting in the ransacking of your personal data, so why is that part of the conversation with 23andme?
(I am not being critical of the AG here but instead pointing out how lax consumer protections have gotten that we even need to have this be a talking point)
You're right that it should not matter. That would be a great world to live in! It's not this one, though. Companies ignore these agreements all the time. Sometimes they're even caught and their wrists get slapped.
More often (I believe) we just never learn the agreements have been broken in the first place.
But it is a rule—almost approaching a law of nature—that companies facing financial distress will begin putting a price tag on private data they've promised never to sell. It's like the cartoon with the starving people in the life raft: they look at your data, and suddenly they don't see a legal agreement to protect it, they see a juicy drumstick.
> After all, we wouldn't talk about Dropbox being sold resulting in the ransacking of your personal data, so why is that part of the conversation with 23andme?
Both 23andme's and Dropbox's privacy policies only require them to notify users if the privacy policy changes (with no restriction on the scope of those changes), so maybe we should (if Dropbox were to be sold)?
CI that isn't running on your servers wants a very deep understanding of how your process works so the provider can minimize its costs (this is true whether or not you pay for the CI).
Totally! It's a legitimate thing! I just wish I had more tools for dynamically providing this information to CI, so that it could work better while I could still write relatively general tooling in a general-purpose language.
The ideal for me is (this is very silly and glib and a total category error) LSP but for CI. Tooling that is relatively normalized, letting me (for example) have a pytest plugin that "does sharding" cleanly across multiple CI operators.
There are some tools and conventions already, of course, but caching and dynamically spinning up jobs in particular are still not there.
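To make the pytest idea concrete, here's a minimal sketch of "sharding as a plugin". The CI_SHARD_INDEX / CI_SHARD_TOTAL environment variables are hypothetical, standing in for whatever each CI operator exposes; this is not an existing cross-CI convention:

    # conftest.py -- sketch only; the CI_SHARD_* env vars are made up
    import os
    import zlib

    def pytest_collection_modifyitems(config, items):
        total = int(os.environ.get("CI_SHARD_TOTAL", "1"))
        index = int(os.environ.get("CI_SHARD_INDEX", "0"))
        if total <= 1:
            return
        # Hash each test's node id so every shard computes the same
        # split without any coordination between CI jobs.
        keep = [t for t in items
                if zlib.crc32(t.nodeid.encode()) % total == index]
        drop = [t for t in items if t not in keep]
        config.hook.pytest_deselected(items=drop)
        items[:] = keep

Each shard then runs the same pytest command with a different CI_SHARD_INDEX, and the union of the shards covers the whole suite exactly once.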
We log TBs per hour and grep is enough for me to find interesting data quite effectively.
The problem with weird log formats is recreating all the neat stuff you can do with standard tooling, not just being able to open a file in a text editor.
While that is true of some YouTube takedowns, I believe this article is referring to a DMCA takedown request, which, while it uses a YouTube-specific form, is based on copyright law.
Because it is considered one of the biggest problems on the platform.
Creators are frustrated that, unless they are big enough to get YouTube to notice them, they at minimum have to dox themselves to remove a bad DMCA claim.
Honestly, in a world with Content ID, they don't necessarily need the current system to remain as-is to keep the big publishers happy; the big publishers already get preemptive blocking of content without lifting a finger.
> Creators are frustrated that, unless they are big enough to get YouTube to notice them, they at minimum have to dox themselves to remove a bad DMCA claim.
Unfortunately, creators have very little leverage over YouTube, and no realistic ability to move to a different platform. They can be frustrated all they want, but until YouTube has a reason to fear creators leaving the platform en masse, there's little pressure on them to change.
One might argue that having tons of popular entertainers who are frustrated with the platform, and there being no exclusivity agreements with those entertainers (AFAIK, and contrary to the way Twitch treats its top streamers) would create an opportunity for competitors to pop up.
That said, I don't think there is realistically anything YouTube can do to materially improve things under current law. They can provide a real person to talk to about why a bogus claim is being taken seriously, but they still have to take it seriously.
If we want Google to fix this for us, they will have to do so via lobbying to change the laws, or at least via some kind of creative but successful lawsuit that dramatically changes how those laws are interpreted. The product people are not capable of fixing this; it's in the lawyers' and the courts' court (so to speak).
For instance, the three-strike system that results in your entire channel being demonetized is not required by law.
Additionally, they could work with creators to put them in touch with people who can help, rather than relying on social media to surface the worst instances.
> One might argue that having tons of popular entertainers who are frustrated with the platform, and there being no exclusivity agreements with those entertainers (AFAIK, and contrary to the way Twitch treats its top streamers) would create an opportunity for competitors to pop up.
I wish, but it's not that easy. A potential competitor needs to not only start off with the ability to handle all the video uploading and delivery, but it has to also provide the audience and monetization. If YouTube was strictly a content hosting service there wouldn't be much of an obstacle in this regard, but it's also the discovery platform that audience members go to in search of content. The network effect is too strong because hosting and discovery are bundled into one.
Sure, I understand that it's unlikely, but is Google willing to bet its multibillion-dollar business on that? They would have to be pretty damn sure. $low_probability * $enormous_potential_losses = $still_pretty_big_expected_losses
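(Illustrative numbers only: even a 1% chance of losing a $10B/year business works out to 0.01 × $10B = $100M/year in expected losses, which buys a lot of support staff.)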
I'm sure that creators would like to have a person to talk to, but it doesn't seem that Alphabet needs to provide one. Would they make more money if they did?
The 4TB M.2 SSDs are getting to a price point where one might consider them. The problem is that it's not trivial to connect a whole bunch of them in a homebrew NAS without spending tons of money.
Best I've found so far is cards like this[1] that allow for 8 U.2 drives, and then some M.2 to U.2 adapters like this[2] or this[3].
In a 2x RAID-Z1 or single RAID-Z2 setup that would give 24TB of redundant flash storage for a tad more than a single 16TB enterprise SSD.
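Quick capacity check, assuming eight 4TB drives: 8 × 4TB = 32TB raw. A single 8-wide RAID-Z2 gives up two drives to parity, (8 - 2) × 4TB = 24TB usable; two 4-wide RAID-Z1 vdevs each give up one, 2 × (4 - 1) × 4TB = 24TB as well.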
On AM5 you can do 6 M.2 drives without much difficulty, and with considerably better perf. Your motherboard will need to support x4/x4/x4/x4 bifurcation on the x16 slot; you put 4 drives there [0], then use the two on-board x4 M.2 slots: one uses CPU lanes and the other is connected via the chipset.
You can do without bifurcation if you use a PCIe switch such as [1]. This is more expensive, but it can also achieve more speed and works in machines that don't support bifurcation. The downside is that it draws more watts.
Right, and whilst 3.0 switches are semi-affordable, 4.0 or 5.0 costs significantly more, though how much that matters obviously depends on your workload.
True. I think a switch which could do, for example, PCIe 5.0 on the host side and 3.0 on the device side would be sufficient for many cases, as one lane of 5.0 can serve all four lanes of a 3.0 NVMe.
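Rough numbers: PCIe 3.0 runs at 8 GT/s per lane (~0.985 GB/s after 128b/130b encoding), so a 3.0 x4 link tops out around 3.9 GB/s; PCIe 5.0 runs at 32 GT/s per lane (~3.9 GB/s), so a single 5.0 lane really does match a 3.0 x4 NVMe.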
But I realize we probably won't see that.
Perhaps it will be realized with higher PCIe versions, given how tight signalling margins will get. But the big guys have money to throw at this so yeah...
I want to see any indication that abundance from AI would benefit mankind first.
While I would love Star Trek, society has been moving very much towards the cyberpunk aesthetic, aka "the rich hold all the power".
To be precise, AI models fundamentally need content to survive, but they need so much content that there is no price that makes sense.
Allowing AI to monetize without enriching the people who allowed it to exist isn't a good path forward.
And to be clear, I do not believe there is a fundamental rift here. Shorten copyright to something reasonable, like 20 years, and in a decade AI will have access to all of the data it needs, guilt-free.
There are glimpses. Getting a high score on an Olympiad means there is the possibility of being able to autonomously solve very difficult problems in the future.
Commuting through Grand Central station every day is certainly not a few hours.
And people don't tend to get CT scans very frequently, so the timeline here is massive.