Your analogy between copyright infringement and bread theft is interesting, given how "stealing bread" has traditionally been used as a shorthand for systemic inequality.
Copyright is why Disney can ruin someone for doing something with Goofy they don't like. Yes, it protects smaller, less profitable artists too, but make no mistake: it's a tool of mass control and cultural capture.
Perhaps it's time to seriously ask whether copyright is actually doing its job of "promot[ing] the Progress of Science and useful Arts." "...but generally speaking, other nations have thought that these monopolies produce more embarrassment than advantage to society" [Jefferson].
Because the one thing I don't think you can plausibly say about the security software space is that there is a lack of options.
It seems uniquely fragmented and messy compared to most other parts of the software industry (not sure why, just saying what I observe).
So the market situation looks very different from the ones the DOJ was going after (like Google in ads; if Wiz were a big ad company, maybe the government would be more interested in trying to block the deal). Wiz isn't even close to having an insurmountably dominant market share in its specific area of expertise, either.
Federal agencies pay over $65 billion to consultants each year. 98% of Booz Allen's revenues (~$11 billion) are from government consulting. I don't know where your threshold for "extreme waste" lies, but that is a hell of a lot of money for consulting firms to siphon from American taxpayers.
None of what you laid out actually explains why it is waste. Doing things costs money. The $65 billion spent on consultants could be providing $650 billion in value.
I’m unable to read the full article linked, but judging from the first two paragraphs, it wasn't setting itself up to explain why it's waste either.
These people hear a big number going to a person they are told to hate and automatically think it's fraud and waste. Simply put, anything I don't agree with is bad and needs to be cut. That's the whole philosophy.
1) I agree that it’d be a good idea to increase the federal workforce so we don’t rely on contractors so much.
2) Presumably not all of that was waste? Screw consulting firms, sure, but presumably more than $0 of that is non-waste, and probably a fair amount of it.
3) Other sources make it clear that some of the $65 billion is spread over multiple years (exactly how it breaks down is unclear).
4) That’s the spending with the top-10 consulting firms across a large set of agencies? I’d have guessed it was higher.
If anything, this is an argument for not outsourcing things to the private sector: grow the public sector and things become more efficient.
Which, to me, intuitively makes sense. More hops is more complexity is more friction is less efficiency.
Read through this and it certainly paints a picture of our government spending a lot of money on a legacy tool that was less efficient. The Army was trying to make a system from 1991 "as easy to use as an iPad". And the biggest complaint about Palantir as a competitor is that it was "not sufficiently funded" to support a broader role.
Take a step back and I just don’t see how the Army can be expected to bring the UX of an iPad to the battlefield. That would be like asking Apple to send their SWEs to the trenches.
Painful to read. I have had similar conversations with my own father, though nothing quite as extreme. There is no moving them from their warped reality.
I have theorized some root causes:
- They cannot differentiate between a well-meaning friend and high-quality information, i.e. there is a fallacy of "this person is honest, hence this forward they just sent me is true".
- Starting from at least my generation (born in the late 80s), there is an understanding of "echo chamber" effects, newsfeeds personalized for engagement, etc. That provides some inoculation against content meant to trigger or resonate with specific sub-groups. I have found this to be completely lacking in discussions with my parents and their generation.
All of this makes it hard to move them out of the disinformation locus they fall into.
> Painful to read. I have had similar conversations with my own father, though nothing quite as extreme. There is no moving them from their warped reality.
Perhaps you just haven't registered them doing so, but every day, people of all ages who feel clear and confident in their own convictions say the same thing about others of all ages: their peers and coworkers, their children, their elders, the youth.
For most people, it's just the nature of conviction to believe that you believe what you believe for good reason and the people who disagree with you are misinformed, stubborn, or both.
While you might be able to find surveys and polls that show some nominal bias about purported "wrong thinking" when segmented by this demographic or that one, the differences are always relatively marginal, with whatever "wrong think" is worth investigating almost always cutting through all segments in a substantial way. Susceptibility to "wrong think" is not meaningfully generational, and nobody's especially immune -- it seems to be just part of life that different people get convinced of different things and can sometimes be quite stuck in their convictions.
It's tragic when entrenched disagreement divides families and communities, as in this story, but it's something we can identify throughout all of history and there's no particular evidence to suggest we're likely to escape it any time soon. It may not even be wise to aspire towards it, as deep and stubborn conviction almost certainly has great merit of its own.
You're quite right. We all develop a bias about the world, and even as I say that I'm pretty good at "critical thinking", there's no telling whether I actually am. Anyone at any level of knowledge, experience, or culture can plausibly come to an implausible belief. It's easy to think ourselves correct and others, if they disagree, incorrect. Always keep that in mind. I am not the beginning. I am not the end. I have my views and my veracity will be perceived diversely by others with their own rich worldviews.
For sure, but I think GP's point (certainly mine) is that the opposite end of the spectrum is also a commonly trod landmine. The world is not split into, say, people who believe global warming is completely bunk and people who are fully informed on the best forecasts of global warming. There are levels to the information people have, and then to how people perceive that information, and then to how people communicate that information, and so on. Science is generally the correct tool for most jobs, but it would be a mistake to say that our implementation and realization of science is necessarily correct. And that's without mentioning what people then do with their knowledge.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
I hereby appeal to the guidelines. Comments like mine or GP, that may come off as controversial or offensive (which is a function of the reader), should be read and responded to with time to breathe. Considering the topic of this thread, even more so. Anyone's immediate interpretation relies on more primitive modes of cognition that are generally more emotionally saturated. I'm not bashing science or whatever. I know some people do (on here?). I want to say that intellectualism is a journey and a struggle. If the truth was easy to come by, only malicious people would be an obstacle. But people aren't generally malicious, just hurt and confused.
Here I'm saying your interpretation of my comment is wrong. You can be wrong. And as you're reading this, keep in mind that I can be wrong. I'm not just including myself to let you save face or something. I mean it. The core of intellectualism is not doing science or whatever, not in practice at least. A rational agent has no stubbornness. But as humans, having humility and self-awareness is necessary.
The scientific method is currently our best method to try to remove our biases and move towards the truth. It's certainly not perfect (funding can introduce systemic biases and can direct research away from certain topics), but it's so much better than the alternatives.
There's also the issue of people/scientists not being willing to adjust their beliefs when presented with new information. Science advances funeral by funeral.
There are degrees of things. The father from this story wants all Democratic presidents of the last 35 years prosecuted for treason. For "murder" apparently as well. I mean, that's pretty far out by any standard. Never mind adjusting his entire lifestyle towards his political views (buying precious metals, survivalist gear, separating from his wife and becoming estranged from his daughter).
Everyone has "negative" impulses of all sorts. Most of us are an asshole sometimes. That's not great, but, you know, people are people. But some people are an asshole most (or all) of the time. That's not the same thing at all as being a flawed human being.
I don't think it is far out to say that every U.S. president in the last 35 years has committed serious abuses of power; many authors argue this point, e.g. Whitney Webb.
I don't know who Whitney Webb is, but I think we both know this is not about the general trend and problems in US politics and the office of the presidency, but bollocks like Vince Foster, Benghazi, "but her emails", Barack HUSSEIN Obama from KENYA etc. etc. etc.
>> But the limiting behavior remains the same: eventually, if we continue generating from a language model, the probability that we get the answer we want still goes to zero
In the previous paragraph, the author uses reasoning models to argue that LeCun was wrong. Yet in the next paragraph he makes this assertion, which is just a paraphrase of LeCun's original claim, the very claim the author says is wrong.
>> Instead of waiting for FAA (fully-autonomous agents) we should understand that this is a continuum, and we’re consistently increasing the amount of useful work AIs
Yes! But this work is already well underway. There is no magic threshold for AGI; instead, the characterization is based on what percentile of the human population the AI can beat. One way to characterize AGI in this manner is "99.99th percentile at every (digital?) activity".
> In the previous paragraph, the author uses reasoning models to argue that LeCun was wrong. Yet in the next paragraph he makes this assertion, which is just a paraphrase of LeCun's original claim, the very claim the author says is wrong.
This is a subtle point that may not have come across clearly enough in my original writing. A lot of folks were saying that the DeepSeek finding that longer chains of thought can produce higher-quality outputs contradicts Yann's thesis overall. But I don't think so.
It's true that models like R1 can correct small mistakes. But in the limit of tokens generated, the chance that they generate the correct answer still decays to zero.
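To spell the limit out, here is the back-of-the-envelope version of the argument (a sketch of my own, assuming, unrealistically, a constant independent per-token error rate ε > 0):

    % Chance that an n-token generation never goes irrecoverably off
    % the rails, given a constant independent per-token error rate \epsilon:
    P(\text{correct}) = (1 - \epsilon)^n \xrightarrow{\; n \to \infty \;} 0

Self-correction, as in R1, effectively lowers ε, but as long as it stays above zero, the limit is unchanged.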
I think this is an excellent way to think about LLMs and any other software-augmented task. Appreciate you putting the time into an article. I do think the points supported by your graph of training steps vs. response length could be strengthened by a graph of response length vs. loss, or response length vs. task performance, etc. Though the number of steps correlates with model performance, this relationship weakens as the number of steps goes to infinity.
There was a paper not too long ago showing that reasoning models will increase their response length more or less indefinitely while trying to solve a problem, but the return from doing so asymptotes toward zero. My apologies for not having the link.
LeCun's argument is fundamentally flawed. When I work on a nontrivial problem, I make mistakes along the way too. That doesn't mean that large multi-step problems are effectively unsolvable. I simply do sanity checks along the way to catch errors and correct them.
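As a toy model of that (my own sketch; attempt_step and the perfect sanity check are hypothetical stand-ins), per-step verification changes the limiting behavior entirely:

    import random

    def attempt_step(p: float) -> bool:
        # Hypothetical stand-in for one step of a multi-step problem;
        # succeeds with probability p.
        return random.random() < p

    def solve_with_checks(n_steps: int, p: float, retries: int = 10) -> bool:
        # Redo each step until a (here, perfect) sanity check passes.
        # Per-step failure drops from (1 - p) to (1 - p) ** retries.
        return all(
            any(attempt_step(p) for _ in range(retries))
            for _ in range(n_steps)
        )

    # Without checks, 100 steps at p = 0.95 succeed ~0.6% of the time
    # (0.95 ** 100). With 10 checked retries per step, per-step failure
    # is 0.05 ** 10, so success is essentially certain.
    trials = 1_000
    print(sum(solve_with_checks(100, 0.95) for _ in range(trials)) / trials)

Of course, this assumes the checker itself is reliable; if the sanity check is imperfect, the effective per-step error rate shrinks but doesn't vanish, which is where the real disagreement lies.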
I do anticipate it, but in the situations where I'm asked to do such calculations, I don't usually have the option of refusing, nor would I want to. For most real-world situations, it's generally better to arrive at a ballpark solution than to refuse to engage with the problem.
In the very unserious hypothetical I'm describing, I'd say Lloyd's capabilities match that of GPT-4. In this case, he's not a calculator, but he is a decent programmer, so like GPT-4 he quickly runs the operation through a script, rather than trying to figure it out in his head.
As a software engineer, this post resonates with me.
But you can find this attitude pervading the whole Linux desktop ecosystem. This post may as well be titled "Why it will never be the year of the Linux Desktop".
I don't really care about "the Linux Desktop" and I don't quite understand why so many people do. I use Linux and it works great for me. If it never makes inroads among my non-technical friends and family, who cares? They are doing fine without it.
I guess this is what survives of the 90s/early-2000s Slashdot mentality where people defend software for ideological reasons and get in Linux vs. Windows fanboy fights, but I think that's anachronistic today. Linux Desktop is never going to take over among average people (increasingly many of whom don't even use desktops at all) and that's fine.
Neural networks can encode any computable function.
KANs have no advantage in terms of computability. Why are they a promising pathway?
Also, the splines in KANs are no more "explainable" than the matrix weights. Sure, we can assign importance to a node, but so what? It has no more meaning than anything else.
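To make the comparison concrete: in an MLP, an edge is one scalar weight feeding a fixed activation; in a KAN, the edge itself is a learned 1-D function (a B-spline in the paper). A rough sketch of my own, using np.interp as a crude piecewise-linear stand-in for a real spline:

    import numpy as np

    # MLP edge: one learnable scalar w; the nonlinearity is fixed.
    def mlp_edge(x: np.ndarray, w: float) -> np.ndarray:
        return np.tanh(w * x)

    # KAN edge: the nonlinearity itself is learned, here a piecewise-
    # linear function given by its values at fixed grid points.
    def kan_edge(x: np.ndarray, grid: np.ndarray, values: np.ndarray) -> np.ndarray:
        return np.interp(x, grid, values)

    x = np.linspace(-1.0, 1.0, 5)
    grid = np.linspace(-1.0, 1.0, 8)
    values = np.random.randn(8)       # this edge's "weights"

    print(mlp_edge(x, 0.7))           # edge parameterized by 1 number
    print(kan_edge(x, grid, values))  # edge parameterized by 8 numbers

You can plot each edge's learned curve, which is what the KAN paper calls interpretability; whether an arbitrary learned 1-D curve is more meaningful than a row of matrix weights is exactly the question above.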
I am not sure it is useful to bring in something as nebulous as "intelligence" and hand wave everything else away, unless you are going to tightly define what intelligence means.
There are only two objective measurements needed:
- Is it making progress towards its goal?
- Is it able to acquire capabilities it didn't have previously?
I am not sure if even the first one is objective enough.
Dismissing the argument without stating why you aren't convinced just comes across as a form of AI Luddism.
Really? IMO capabilities can be enumerated as a set of challenges in the category of things you want done. We don't need to discuss if an IC is "intelligent" to agree that the original $5 Pi Zero is "more capable" at that than all of humanity combined.
Sure, you could also say that GPT-4 passing the bar exam tells you it can handle the kinds of questions on the bar exam, without that extending to the kinds of questions actual lawyers deal with. Goodhart's law still applies, if that was your point?
Can you please not break the site guidelines when posting here? You did it twice in this thread unfortunately (the other place was here: https://news.ycombinator.com/item?id=41773709).
If you'd please review https://news.ycombinator.com/newsguidelines.html and make your substantive points thoughtfully and respectfully, regardless of how wrong someone is or you feel they are, we'd appreciate it.
Yes. In this case, it is the artist's sole right to reproduce said images, based on their creative output.
>> decreases scarcity, it doesn't increase it
What does scarcity have to do with stealing? You can steal bread and reduce food scarcity, but that is still theft.