It's important to note this isn't a majority: 30% of eligible voters chose the president; and even of those, hardly any are deeply engaged with his current policies and behaviour.
It's important not to so quickly cede the democratic ground here -- this isn't a democratic movement. It's a 49% election of a president, with 30% of the eligible voters, who collectively did not vote for the constitution to be suspended. They voted for a president, an office which exists by and within the framework of that constitution. There was no referendum on whether the constitution should be amended to allow for effectively unlimited presidential power.
In games you have 16ms to draw a billion-plus triangles (etc.).
On the web, you have 100ms to round-trip a request under arbitrarily high load (etc.).
Cases where you cannot "stop the world" at random and just "clean up garbage" are quite common in programming. And when they happen in GC'd languages, you're much worse off.
Azul C4 is not a pauseless GC. In the documentation it says "C4 uses a 4-stage concurrent execution mechanism that eliminates almost all stop-the-world pauses."
> C4 differentiates itself from other generational garbage collectors by supporting simultaneous-generational concurrency: the different generations are collected using concurrent (non stop-the-world) mechanisms
(As with any low-pause collector, the rest of your code is uniformly slower by some percentage because it has to make sure not to step on the toes of the concurrently-running collector.)
The benchmarks game shows memory use with default GC settings (as a way to uncover space-time tradeoffs), mostly for tiny tiny programs that hardly use memory.
Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
> Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
Yes, I was referring to benchmarks with large memory consumption, where Java still uses 2 to 10 times more memory (as in the binary-trees task), which is a large overhead.
Well, 1) the temporary allocator strategy; and 2) `defer` kinda go against the spirit of this observation.
With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. Of those it doesn't, `defer` is that "other single line".
I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.
Something feels deeply wrong with RAII-style languages... you bear the burden of reasoning about implicit behaviour, all the while that behaviour saves you nothing. It's the worst of both worlds: hiddenness and burdensomeness.
Neither of those gives memory safety, which is what the parent comment is about. If you release the temporary allocator while a pointer to some data is live, you get use after free. If you defer freeing a resource, and a pointer to the resource lives on after the scope exit, you get use after free.
The dialectic begins with OP, and has pcw's reply and then mine. It does not begin with pcw's comment. The OP complains about Rust not because they imagine Jai is memory safe, but because they feel the rewards of its approach significantly outweigh the costs of Rust.
pcw's comment was about tradeoffs programmers are willing to make -- and paints the picture more black-and-white than the reality; and more black and white than OP.
I don't understand this take at all. The borrow checker is automatic and works across all variables. Defer et al. require you to remember to use them, and to use them correctly. It takes more effort to use defer correctly, whereas Rust's borrow checker works for you without needing to do much extra at all! What am I missing?
What you're missing is that Rust's borrowing rules are not the definition of memory safety. They are just one particular approach that works, but with tradeoffs.
Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't.
That rule prevents memory corruption, but it outlaws many programs that break the rule yet actually are otherwise memory safe, and it also outlaws programs that follow the rule but wherein the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
That is true of `&mut T`, but `&mut T` is not the only way to do mutation in Rust. The set of possible safe patterns gets much wider when you include `&Cell<T>`. For example see this language that uses its equivalent of `&Cell<T>` as the primary mutable reference type, and uses its equivalent of `&mut T` more sparingly: https://antelang.org/blog/safe_shared_mutability/
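To make the `&Cell<T>` point concrete, here is a minimal sketch (names are mine, not from the linked article): two aliasing shared references to the same `Cell` can both mutate it, because `Cell` never hands out interior references, so the `&mut T` exclusivity rule is never in play.

```rust
use std::cell::Cell;

// Mutation through a shared reference: no `&mut` anywhere, so aliasing
// is perfectly legal and safe.
fn bump(counter: &Cell<i32>) {
    counter.set(counter.get() + 1);
}

fn main() {
    let c = Cell::new(0);
    let a: &Cell<i32> = &c;
    let b: &Cell<i32> = &c; // two live aliases to the same data
    bump(a);
    bump(b);
    assert_eq!(c.get(), 2);
}
```

The tradeoff is that `Cell` only allows get/set of whole values (for `Copy` types, plus `replace`/`take`), so you give up interior pointers in exchange for unrestricted aliasing.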
> The borrow checker is automatic and works across all variables.
Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".
What's hilarious about "fighting the borrow checker" is that it's about the lexical lifetime borrow checking, which went away many years ago - fixing that is what "Non-lexical lifetimes" is about, which if you picked up Rust in the last like 4-5 years you won't even know was a thing. In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.
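The canonical example of code the old lexical checker rejected (a sketch; exact error wording varied by compiler version):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];           // shared borrow of `v`
    println!("first = {first}"); // last use of that borrow
    // Under lexical borrow checking, the borrow of `v` lasted to the end
    // of the enclosing scope, so this push was an error. With NLL the
    // borrow ends at its last use above, and this compiles.
    v.push(4);
    assert_eq!(v.len(), 4);
}
```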
Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.
Yes of course when you make lifetime mistakes the borrowck means you have to fix them. It's true that in a sense in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it - and that in a language like Jai you can just endure the weird crashes (but remember this article, the weird crashes aren't "Undefined Behaviour" apparently, even though that's exactly what they are)
As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".
I appreciate what you're saying, though isn't undefined behavior having to do with the semantics of execution as specified by the language? Most languages outright decline to specify multiple threads of execution, and instead provide it as a library. I think C started that trend. I'm not sure if Jai even has a spec, but the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.
> the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.
Jai uses LLVM so in many cases the UB is exactly the same as you'd see in Clang since that's also using LLVM. For example Jai can explicitly choose not to initialize a variable (unlike C++ 23 and earlier this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this means the uninitialized variable is poison. Exactly the same awful surprises result.
> because it is the kind of optimizing compiler you say it is
What other kind of optimisations are you imagining? I'm not talking about a particular "kind" of optimisation but the entire category. Let's look at two real-world optimisations from opposite ends of the scale to see:
1. Peephole removal of null sequences. This is a very easy optimisation, if we're going to do X and then do opposite-of-X we can do neither and have the same outcome which is typically smaller and faster. For example on a simple stack machine pushing register R10 and then popping R10 achieves nothing, so we can remove both of these steps from the resulting program.
BUT if we've defined everything this can't work because it means we're no longer touching the stack here, so a language will often not define such things at all (e.g. not even mentioning the existence of a "stack") and thus permit this optimisation.
2. Idiom recognition of population count. The compiler can analyse some function you've written and conclude that it's actually trying to count all the set bits in a value, but many modern CPUs have a dedicated instruction for that, so, the compiler can simply emit that CPU instruction where you call your function.
BUT You wrote this whole complicated function, if we've defined everything then all the fine details of your function must be reproduced, there must be a function call, maybe you make some temporary accumulator, you test and increment in a loop -- all defined, so such an optimisation would be impossible.
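A hand-rolled version of the idiom, as a sketch (whether LLVM actually rewrites any given loop into a `popcnt` instruction depends on the exact pattern and optimisation level):

```rust
// Kernighan-style population count. Optimising backends can recognise
// loops of this shape and emit the CPU's dedicated popcount instruction,
// precisely because the language did NOT define every intermediate step
// of the loop as observable behaviour.
fn count_set_bits(mut x: u32) -> u32 {
    let mut n = 0;
    while x != 0 {
        x &= x - 1; // clear the lowest set bit
        n += 1;
    }
    n
}

fn main() {
    assert_eq!(count_set_bits(0b1011), 3);
    assert_eq!(count_set_bits(u32::MAX), 32);
    // Agrees with the intrinsic-backed standard-library method.
    assert_eq!(count_set_bits(0b1011), 0b1011u32.count_ones());
}
```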
>In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.
NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL were meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.
What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
> What does come up in practice is partial borrowing errors.
For some people. For example, I personally have never had a partial borrowing error.
> it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
This is not for sure. That is, while it's code that could work, it's not obviously clear that it's correct. Rust cares a lot about the contract of function signatures, and partial borrows violate the signature, that's why they're not allowed. Some people want to relax that restriction. I personally think it's a bad idea.
> Rust cares a lot about the contract of function signatures, and partial borrows violate the signature
People want to be able to specify partial borrowing in the signatures. There have been several proposals for this. But so far nothing has made it into the language.
Just to give an example of where I've run into countless partial borrowing problems: writing a Vulkan program. The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state. (Of course, this is not safe, because you could have accidental mutable aliasing.)
But in Rust, that just doesn't work. You get countless errors like "Can't call self.resize_framebuffer() because you've already borrowed self.grass_texture" (even though resize_framebuffer would never touch the grass texture), "Can't call self.upload_geometry() because you've already borrowed self.window.width", and so on.
So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments.
It would be so much nicer if you could instead annotate that resize_framebuffer only borrows self.framebuffer, and no other part of self.
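The workaround being described can be sketched like this (`GraphicsState` and its fields here are hypothetical stand-ins for the real Vulkan state): a free function that borrows only the field it needs compiles, because the borrow checker can see the field borrows are disjoint; the equivalent `self.resize_framebuffer()` method would conflict with any live borrow of another field of `self`.

```rust
struct Framebuffer { width: u32, height: u32 }

struct GraphicsState {
    framebuffer: Framebuffer,
    grass_texture: Vec<u8>, // never touched by resizing, but a &mut self
                            // method borrows it anyway
}

// The workaround: borrow only the field you need, not all of `self`.
fn resize_framebuffer(fb: &mut Framebuffer, width: u32, height: u32) {
    fb.width = width;
    fb.height = height;
}

fn main() {
    let mut state = GraphicsState {
        framebuffer: Framebuffer { width: 640, height: 480 },
        grass_texture: vec![0; 16],
    };
    let grass = &state.grass_texture;                       // shared borrow of one field...
    resize_framebuffer(&mut state.framebuffer, 1920, 1080); // ...disjoint &mut of another: OK
    assert_eq!(grass.len(), 16);
    assert_eq!(state.framebuffer.width, 1920);
}
```

Swap the free function for a `fn resize_framebuffer(&mut self, ...)` method and the shared borrow of `grass_texture` makes the call an error, which is exactly the complaint above.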
> People want to be able to specify partial borrowing in the signatures.
That's correct. That's why I said "Some people want to relax that restriction. I personally think it's a bad idea."
> The usual pattern in C++ etc is to just have a giant "GrahpicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state.
Yes, I think that this style of programming is not good, because it creates giant balls of aliasing state. I understand that if the library you use requires you to do this, you're sorta SOL, but in the programs I write, I've never been required to do this.
> So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
I am not sure that it's a good idea to change the language to make using poorly designed APIs easier. I also understand that reasonable people differ on this issue.
>Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
What they're describing is the downstream effect of not designing APIs that way. If you could have a single giant GraphicsState and define everything as a method on it, you would have to pass around barely any arguments at all: everything would be reachable from the &mut self reference. And either with some annotations or with just a tiny bit of non-local analysis, the compiler would still be able to ensure non-aliasing usage.
"functions that each take 20 parameters and return 5 values" is what you're forced to write in alternative to that, to avoid partial borrowing errors: for example, instead of a self.resize_framebuffer() method, a free function resize_framebuffer(&mut self.framebuffer, &mut self.size, &mut self.several_other_pieces_of_self, &mut self.borrowed_one_by_one).
I agree that the severity of this issue is highly dependent on what you're building, but sometimes you really do have a big ball of mutable state and there's not much you can do about it.
A lot has been written about this already, but again I think you're simplifying here by saying "once you get it". There's a bunch of options here for what's happening:
1. The borrow checker is indeed a free lunch
2. Your ___domain lends itself well to Rust, other domains don't
3. Your code is more complicated than it would be in other languages to please the borrow checker, but you are unaware because it's just the natural process of writing code in Rust.
There's probably more things that could be going on, but I think this is clear.
I certainly doubt it's #1, given the high volume of very intelligent people who have had negative experiences with the borrow checker.
"But after an initial learning hump, I don't fight the borrow checker anymore" is quite common and widely understood.
Just like any programming paradigm, it takes time to get used to, and that time varies between people. And just like any programming paradigm, some people end up not liking it.
I'm not sure what you mean here, since in different replies to this same thread you've already encountered someone who is, by virtue of Rust's borrow checker design, forced to change their code in a way that is, to that person, a net negative.
Again, this person has no trouble understanding the BC; they have trouble with the outcome of satisfying the BC. Also, this person is writing Vulkan code, so intelligence is not the problem.
> is quite common and widely understood
This is an opinion expressed in a bubble, which does not in any way disprove that the reverse is also expressed in another bubble.
"common" does not mean "every single person feels that way" in the same sense that one person wanting to change their code in a way they don't like doesn't mean that every single person writing Rust feels the way that they do.
If your use case can be split into phases you can just allocate memory from an arena, copy out whatever needs to survive the phase at the end and free all the memory at once. That takes care of 90%+ of all allocations I ever need to do in my work.
For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.
I can have graphs with pointers all over the place during the phase, I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.
Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.
There are other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.
You can explain this sort of pattern to the borrow checker quite trivially: slap a single `'arena` lifetime on all the references that point to something in that arena. This pattern is used all over the place, including rustc itself.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
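A minimal sketch of the `'arena` pattern (using a `Vec` as the backing storage and `Cell` links, since the standard library has no bump allocator; crates like typed-arena give you the same shape with a real allocator):

```rust
use std::cell::Cell;

// Every reference into the arena carries the same `'arena` lifetime.
// `Cell` lets us wire up links through shared references, so aliasing
// "pointers all over the place" is fine.
struct Node<'arena> {
    value: i32,
    next: Cell<Option<&'arena Node<'arena>>>,
}

fn wire_cycle<'arena>(nodes: &'arena [Node<'arena>]) {
    for i in 0..nodes.len() {
        nodes[i].next.set(Some(&nodes[(i + 1) % nodes.len()]));
    }
}

fn main() {
    // The arena's backing storage; everything in it dies together at the
    // phase boundary when `arena` is dropped.
    let arena: Vec<Node> = (1..=3)
        .map(|v| Node { value: v, next: Cell::new(None) })
        .collect();
    wire_cycle(&arena); // a cycle: no borrow checker complaints
    // Walk four hops around the cycle.
    let (mut cur, mut sum) = (&arena[0], 0);
    for _ in 0..4 {
        sum += cur.value;
        cur = cur.next.get().unwrap();
    }
    assert_eq!(sum, 7); // 1 + 2 + 3 + 1
}
```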
I remember having multiple issues doing this in rust, but can't recall the details. Are you sure I would just be able to have whatever refs I want and use them without the borrow checker complaining about things that are actually perfectly safe? I don't remember that being the case.
Edit: reading wavemode comment above "Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't." that I think was at least one of the problems I had.
The main issue with using arenas in Rust right now is that the standard library collections use the still-unstable allocator API, so you cannot use those with them. However, this is a systems language, so you can use whatever you want for your own data structures.
> reading wavemode comment above
This is true for `&mut T` but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of internal mutability and use `&T`, for example. What is needed depends on the circumstances.
wavemode's comment only applies to `&mut T`. You do not have to use `&mut T` to form the reference graph in your arena, which indeed would be unlikely to work out.
Not sure about the implicit behavior. In C++, you can write a lot of code using vector and map that would require manual memory management in C. It's as if the heap wasn't there.
Feels like there is a beneficial property in there.
Iirc, pretty sure jblow has said he's open sourcing it. I think the rough timeline is: release game within the year, then the language (closed-source), then open source it.
Tbh, I think a lot of open source projects should consider following a similar strategy --- as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
it's not even contributions, but that other people might start asking for features, discuss direction independently (which is fine, but jblow has been on the record saying that he doesn't want even the distraction of such).
The current idea of keeping Jai closed source is to control the type of people who get to alpha test it: people who are capable of overlooking the jank but will have feedback on fundamental issues rather than polish. They would also need to accept the alpha-level completeness of the libraries, and be capable of distinguishing a compiler bug from their own bug or misuse of a feature, etc.
You can't get this level of control if the source is opened.
You can simply ignore them. This has worked for many smaller programming languages so far, and there exists plenty of open source software that is still governed entirely by a small group of developers. The closedness of Jai simply means that Blow doesn't understand this aspect of open source.
Ignoring people is by itself tedious and onerous. Knowing what I do about him and his work, and having spent some time watching his streams, I can say with certainty that he understands open source perfectly well and has no interest -- nor should he -- in obeying any ideology, yours for instance, as to how it's supposed to be handled, if it doesn't align with what he wants. He doesn't care whether he's doing open source "correctly."
Yeah, he is free to do anything as he wants, but I'm also free to ignore his work due to his choice. And I don't think my decision is unique to me, hence the comment.
Maybe there's aspirations to not be a "smaller programming language" and he'd rather not cause confusion and burn interested parties by having it available.
Releasing it when you're not ready to collect any upside from that decision ("simply ignore them") but will incur all the downside from a confused and muddled understanding of what the project is at any given time sounds like a really bad idea.
It seems to me there's already enough interest for the closed beta to work.
A lot of things being open sourced are using open source as a marketing ploy. I'm somewhat glad that jai is being developed this way - it's as opinionated as it can be, and with the promise to open source it after completion, i feel it is sufficient.
Yep. A closed set of core language designers who have exclusive right to propose new paths for the language to take while developing fully Free and in the open is how Zig is developing.
That kind of means jack squat though. Jai is an unfinished *programming language*, Sqlite is an extremely mature *database*.
What chii is suggesting is open sourcing Jai now may cause nothing but distractions for the creator with 0 upside. People will write articles about its current state, ask why it's not like their favorite language or doesn't have such-and-such library. They will even suggest the creator is trying to "monopolize" some ___domain space because that's what programmers do to small open source projects.
That's a completely different situation from Sqlite and Linux, two massively-funded projects so mature and battle-tested that low-effort suggestions for the projects are not taken seriously. If I write an article asking Sqlite to be completely event-source focused in 5 years, I would be rightfully dunked on. Yet look at all the articles asking Zig to be "Rust but better."
I think you can look at any budding language over the past 20 years and see that people are not kind to a single maintainer with an open inbox.
We can muse about it all day; the choice is not ours to make. I simply presented the reality that other successful open source projects exist that were also in an early development state.
There are positives and negatives to it, I'm not naive to the way the world works. People have free speech and the right to criticise the language, with or without access to the compiler and toolchain itself, you will never stop the tide of crazy.
I personally believe that you can do opensource with strong stewardship even in the face of lunacy, the sqlite contributions policy is a very good example of handling this.
Closed or open, Blow will do what he wants. Waiting for a time when jai is in an "good enough state" will not change any of the insanity that you've mentioned above.
I don't have a stake in this particular language or its author, I was just discussing the pros and cons of the approach.
> Waiting for a time when jai is in an "good enough state" will not change any of the insanity that you've mentioned above.
I outlined some reasons why I think it would, and I think there's good precedent for that.
> the choice is not ours to make
I never said it was.
> People have free speech
I don't think I argued people don't have free speech? This is an easily defensible red herring to throw out, but it's irrelevant. People can say whatever they want on any forum, regardless of the project's openness. I am merely suggesting people are less inclined to shit on a battle-tested language than a young, moldable one.
Interesting, they've softened their stance. Today, it reads
> In order to keep SQLite in the public ___domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public ___domain.
But it used to read
> In order to keep SQLite in the public ___domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from unknown persons.
Seems to be hardened, not softened: a person who has submitted an affidavit dedicating code to the public ___domain is at least minimally known, but a person may be known without submitting an affidavit, so the new form is strictly a stronger restriction than the old one.
You say this now, but between 2013 and around 2023 the prevailing definition of open source was that if you don't engage with the community and don't accept PRs, it is not open source. And people will bad-mouth the project around the internet.
Linux started before 2013? So did SQLite? And both are not even comparable, as they were already the dominant force and not newly started projects.
And in case you somehow think I am against you: I am merely pointing out what happened between 2013 and 2023. I believe you were also one of the few on HN who fought against it.
Open source software with a closed development model has existed for a very long time, so that should have been considered regardless of whether it counted as open source or not. (And I think it was around the 2000s, not the 2010s, when that misconception was more widespread.)
I don't think the issue is just contributions. It's the visibility.
When you're a somewhat famous programmer releasing a long anticipated project, there's going to be a lot of eyes on that project. That's just going to come with hassle.
Well, it is the public internet, people are free to discuss whatever they come across. Just like you're free to ignore all of them, and release your software Bellard-style (just dump the release at your website, see https://bellard.org/) without any bug tracker or place for people to send patches to.
Having a lot of eyes on it is only a problem if you either have a self-esteem problem and so the inevitable criticism will blow you up or, you've got an ego problem and so the inevitable criticism will hurt your poor fragile ego. I think we can be sure which of these will be a problem for Jonathan "Why didn't people pay $$$ for a remaster of my old game which no longer stands out as interesting?" Blow.
yep and JBlow is a massive gatekeeper who discourages people from learning programming if he doesn't believe they can program the way he thinks a programmer should. He is absolutely running from any criticism that will hurt his enormous yet incredibly fragile ego.
The hate he is receiving is bizarre. It takes guts to be opinionated - you are effectively spilling your mind (and heart) to people. And yet some people will assume the worst about you even if it's an exact inversion of the truth.
It's not a "misconception". Open source implying open contributions is a very common stance, if not even the mainstream stance. Source availability is for better or for worse just one aspect of open source.
It is a misconception. Open source doesn’t mean the maintainer needs to interact with you. It just means you can access the code and do your own fork with whatever features you like.
Open Source definition ( https://opensource.org/osd ) says nothing about community involvement or accepting contributions. It may be common, but it is not necessary, required or even hinted at in the license.
For many it is very much a philosophy, a principle, and politics. The OSI is not the sole arbiter of what open source is, and while their definition is somewhat commonly referred to, it is not the be all end all.
> Any software is source-available in the broad sense as long as its source code is distributed along with it, even if the user has no legal rights to use, share, modify or even compile it.
You have the legal right to use, share, modify, and compile, SQlite's source. If it were Source Available, you'd have the right to look at it, but do none of those things.
IMO the main thing they're risking by open sourcing it is adoption. Keeping it closed source is a pretty clear sign to the rest of the world that the language is not ready for widespread adoption. As soon as you open source it, even if you mark it as alpha, you'll end up with people using the language, and breaking changes will at that point break people's code.
Keeping things closed source is one way of indicating that. Another is to use a license that contains "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED [...]" and then let people make their own choices. Just because something is open source doesn't mean it's ready for widespread adoption.
You're describing pretty much every popular open source license here, including the Linux kernel's (GPLv2). This doesn't set the expectation that things can and will break at any time. That's also not the approach maintainers take with most serious projects.
There is a lot of experimentation going on as well. A few months ago, two new casting syntaxes were added for users to evaluate. The plan is to keep only one and remove the others before release.
That’s what I meant by forked. If Jonathan wants to keep his branch closed source, that’s fine, as long as he cuts a release, gives it a GNU license and calls it OpenJai or something. He doesn’t have to deal with the community, somebody will do that for him.
An argument can easily be made that Jai could have been released as closed-source some time ago. Many fans and the curious just want to be able to get their hands on it.
Jon is not going to stop public reaction nor will Jai be perfect, regardless of when he releases. At least releasing sooner, allows it to keep momentum. Not just generated by him, but by third parties, such as books and videos on it. Maybe that's where Jon is making a mistake. Not allowing others to help generate momentum.
What's 'burning' is the hospitals, military research, medical research, and the vast array of technical R&D that Congress has requested Harvard to perform.
This is just an attack on Americans. Harvard is secure regardless of what destruction the presidency does to the projects Congress has asked of it.
I had been somewhat neutral on Trump -- the grievances of the American right were real and underserved. Major civil institutions of power and culture had been monopolised by the left; there had been a "default preference" for wealth over income, capital over labour. Immigration had been treated as a purely economic question, with little regard to the suddenness of the population and cultural changes meted out on the communities which took on the highest levels.
I had thought the left-wing reaction, accusing this of authoritarianism, overblown. Many of the actions taken had been taken by previous left-wing administrations, just with less publicity (and so on).
However, I think the Rubicon has been crossed. The president now believes he has impunity to engage in extrajudicial rendition to enslave people, including citizens, in foreign prisons. He attacks the centres of civil power: universities, law firms, and (likely soon) the mass media. And rival state power: ignoring the Supreme Court and Congress (i.e., reorganising the federal government beyond his power), and attacking the institutional professional class in the executive.
All the while, I increasingly see people on the centre-right in the mass media credulously believing the president's account of his actions; identifying with the president as an expression of their power, and believing, likewise, that the whole of civil society is legitimately brought under state ideological control. That the presidency is the state, that the state is society, and that society must "for democratic reasons" be brought to the state's heel.
The next phase of this will be very dangerous for the American people. I think civil resistance will be targeted for, at best, imprisonment -- perhaps even rendition to a foreign prison. All one needs to say is that the resistance protestors are domestic terrorists, and Trump has a wide base of people credulously willing to believe it -- no doubt looting and other things will occur. It is very easy to imagine state elections being brought under "federal control" and a process of election rigging soon following.
As far as I can see there are two forces acting against the possibility of an American tyranny: first, Trump's own manner of performing what he's doing completely destabilises his plans (e.g., on the economy especially); secondly, the federalism of the American system.
It now seems plausible to me to imagine a future in which a Democratic state violently expels federal forces, especially if, e.g., ICE are used to rendition American citizens. It will be almost an obligation of states to suspend the federal police presence. This, in the end, may make the totalisation of federal state power difficult.
> Major civil institutions of power and culture had been monopolised by the left; there had been a "default preference" for wealth over income, capital over labour.
I am not from the US, and I watch with mild amusement its slide into full blown banana republic dictatorship with a sprinkle of last century European fascism - I mean, at this point ICE is basically a secret police that disappears people, not unlike Stasi or Gestapo from years past.
But you thought that Trump was an answer to "wealth over income" or "capital over labor"? Even without knowing that much about the intricacies of US politics this sounds pretty naive.
Well, the tariffs do very much show that Trump has no "default preference" for the capitalised class -- he's very willing to wreck the American stock market.
Whether his solution works or not isn't relevant to the question of whether Trump's real preference is, "by default", for the American corporate owner.
It's very unhelpful to reduce Trump down to basic evil motivations, and to call any ascription of a non-evil one "naive". It is this manner which has made the left entirely unable to communicate beyond itself.
> Well, the tariffs do very much show that Trump has no "default preference" for the capitalised class -- he's very willing to wreck the American stock market.
Yes, Marxist-Leninist governments also wreck their local stock markets. That doesn't mean they, or Trump, are engaged in building a superior economic system for prioritizing labor over capital.
> Well, the tariffs do very much show that Trump has no "default preference" for the capitalised class
His preference seems to be to favor those that suck up to him.
I mean, that is why all the top billionaires are all very cozy to him nowadays. They may be assholes, but they are smart assholes. Psychopathically smart.
> It's very unhelpful to reduce trump down to basic evil motivations
I didn't reduce him to evil motivations. I just said it was naive to think he would somehow benefit labor and not capital or wealth.
Write down on a sticky note "if the government sends a US citizen to CECOT I will..." and fill in the rest of this sentence. Put it somewhere you see it everyday.
I'm personally absolutely sick of the "oh it is not a problem until..." lines moving basically daily. Everybody defending this administration needs to commit to a line otherwise I fully expect to see posts saying what you are saying here with ever more brutal and violent outcomes from the state for the rest of time.
The problem is that there's no mechanism to prevent that. And there's a pretty clear route to it happening: first mistaken immigrants, then mistaken murders, then mistaken "domestic terrorists" -- and you have the federal gov. disappearing political opponents.
The issue with these extrajudicial renditions to foreign prisons is the extrajudicial part. The rest of it is just immoral -- the former part, a catastrophe.
The extrajudicial part has been around since the Clinton administration[1]. Somehow neither Obama nor Biden chose to get rid of this policy.
> Within days of his 2009 inauguration, Barack Obama signed an executive order opposing rendition torture and established a task force to provide recommendations about processes to prevent rendition torture. His administration distanced itself from some of the harshest counterterrorism techniques but permitted the practice of rendition to continue
The new part here is that it's foreign nationals being taken from US soil instead of another country's soil.
Yes, the new part is that it's extrajudicial. The rendition of foreign citizens can be done without a court, because they aren't covered by the US constitution. Only those "under the power of the united states government" are granted due process by US courts.
The line being crossed in taking persons unknown to courts from the United States to a foreign country isn't "a new part". It is to suspend the constitution and grant the president the power not merely to arbitrarily detain, but to do so in a foreign prison.
It's hard to overstate how serious this is. If it were only this, and nothing else, we might hope it will stay bounded by a "hopefully" diligent ICE. But coupled with the assault on all rival systems to presidential power, there's nothing to be hopeful about.
The constitution has been suspended; the president has sequestered the force of the federal government to bring under his private power the whole of American society, beginning with the most powerful rivals: the courts, the media, the universities, the law firms, and so on.
He will next suspend the broadcasting licences of media outlets.
Optimistically, the Supreme Court could suspend his emergency powers -- as it ought, since there is no war or emergency. This may make the federal government unable to execute his wishes -- but if they've replaced enough workers there already, it might be too late.
> Do you have an example of ICE "disappearing" a US citizen
I mean, once they start disappearing people that are completely legal in the country, disappearing citizens is just a minor step forward.
By all means, I am not in the US, I'll keep enjoying my popcorn from afar. I wonder if when the ovens are turned on in some Central American death camp you will move the goalposts to "but, but we don't even have gas showers yet".
> cry wolf
The wolf has been here for a while buddy, we are just discussing what color and size it is.
Trump has already publicly alluded to shipping citizens to El Salvador, aka "disappearing" them. That means it's already a possibility in his mind, which brings us pretty close to it happening.
Other factors to consider in the "states versus federal" conflicts that could occur are that each state has its own National Guard forces and equipment which are under the state governor's control. The National Guard are under dual control in that they can respond to the state's needs or to federal needs. But they are still citizens of that state who put on the uniform when needed.
This could lead to National Guard versus federal forces stand-offs, as was seen in the 1960s over Civil Rights disagreements between state and federal governments.
Another factor that differentiates the United States in conflicts of the people against their government is how heavily armed and resourceful the US populace is. In the War on Terror, US Armed Forces faced insurgency militias in Iraq and Afghanistan. If similar insurgency militias were to arise in the United States in response to illegal federal government actions, it would probably have similar results.
Indeed. That law and several others grant the president expanded powers to handle emergency situations. However, who determines what is an emergency and if the expanded powers are needed? In many cases, it is the president himself (or herself).
The underlying assumption is that the president would use such powers judiciously and that the expanded powers would enable emergency situations to be handled quickly since it would probably take more time for congress to respond and get things done.
The question now is: what if such powers are not used judiciously? What recourse is there?
> who determines what is an emergency and if the expanded powers are needed?
Congress. But so far they're letting Trump declare emergencies left and right. All of his tariffs are being enacted under an "emergency" to bypass Congress (since under non-emergency situations, Congress sets tariffs).
Had the Democrats won control of the House/Senate, a lot of this nonsense wouldn't be happening (or even if conventional Republicans had control -- which is why in Trump 1.0, when the conventional GOP held the Senate, Trump couldn't go off the rails as he has now).
There's also State Guard / State Defense Forces, which are solely under state jurisdiction. But in many states, and especially in blue states, it has devolved into a uniformed and militarized but unarmed organization.
However, this entire line of thought presupposes that those people (whether in NG or SDF) would align themselves with the state and against the feds, and that's not given at all. I know from personal experience of close interaction with my local right-wing militias in WA state that quite a few members are in NG or WSG.
The same goes for the armed populace. It's true that there's a lot of weaponry in private hands, and we're not just talking your stereotypical AR-15, but stuff like, say, .50 BMG anti-materiel rifles, grenade launchers, and even privately owned tanks and artillery in some cases. However, they are disproportionately in the hands of people who lean far right, so in the event of open conflict I would expect them to work with the feds. There are some of us on the left who are heavily armed precisely so as to counterbalance that, but we are outnumbered by an order of magnitude. Then there's the issue of training - right-wing militias actually get together and train, and while it is derided as LARPing - often for good reasons - it's still better than nothing. More importantly, it's not just training but also networking - those people know each other and have plans to get together and coordinate "when it's time".
Indeed, a particularly nasty possibility is that Trump wouldn't even need to issue any kinds of explicit orders to federal troops, but rather just let the right wing paramilitaries loose by simply not doing anything to stop them (and making it widely known that there will be no consequences).
Is there a mea culpa here? This was all clear for a decade during which you were "somewhat neutral on Trump" and everyone was telling those of us warning people about it that we were hysterical and deranged.
But now I see posts like this and it's like "how could we have known this was going to happen?". Well, you could have! At least maybe you can update your priors on how seriously to take warnings that a political movement is dangerous?
I went the other way - 2016-2020 I truly believed the US was sliding into dictatorship, but the constant cries of "Nazi" and "fascism" followed by nothing of the sort taking place have completely desensitized me to these accusations. I've also taken time to listen to the opinions of supposed "fascists" like Jordan Peterson and found them reasonable even if I don't always agree with them. The vaguely dictatorial vaccine mandate policy pursued by the Democrats and the way the trucker convoy was handled didn't help either.
Now I read every comparison to the Nazis with a huge grain of salt and I'm "somewhat neutral" on Trump.
Out of curiosity, what would your rubicon[1] be in the current circumstance? I've found it useful to draw specific internal lines instead of trying to pull apart tit-for-tat reactions between two belligerents.
Yeah, and as of a month ago, I was in exactly the same position.
The extrajudicial enslavement of legal immigrants in foreign prisons "crosses the Rubicon". You cannot have presidential power operating in this way. It's not the immigrant part, it's the extrajudicial part. If Trump has this power against anyone, he then has it against everyone.
Against that backdrop you have the targeting of law firms that have represented political opponents of the president; the attempt to totalise control of universities, and so on.
The whole thing is now teetering on the edge of what was previously just hysteria.
Perhaps the final nail in the coffin for me has been seeing online how credulous the right has been about the government's propaganda. This tells me that the conditions for totalitarianism are here in the people -- a mass of people identify with Trump and uncritically believe the propaganda. The dismantling of rival power centres in all of American society and government is taking place whilst a large number of people applaud.
People haven't yet seen the transition that has taken place within the Trump government. Before, the Musk programme, the deportations, etc. were all on the extreme side of constitutional presidential power.
We are actually now past that, and his supporters are operating as if we're not. They don't realise they're applauding what they will severely come to regret. They think they're applauding the end of DEI, of elite power, of the stock-owning class. When in fact, it's pretty clear now, these are just the grievances being used to establish unlimited intrusion of the presidency into all aspects of civil and political life.
The next mass protest will precipitate a crisis of the legitimacy of the federal monopoly on violence in the US. Unless some means can be deployed soon to constrain the president, america is in a very dangerous position.
What, exactly, do you think the left has been doing for the last ~10 years? The universities are basically captured by one party and their ideology and I suspect Trump's hamfisted attempts to counter that will only make a small dent.
Universities are at a point where getting a job often requires including a DEI statement in your CV[1]. In my opinion, this is not compatible with academic freedom.
The overall feeling I get is one of despair. Neither the Democrats nor the Republicans are fundamentally interested in "freedom". They just choose to nibble away at different corners of the constitution.
The problem arises when the president does it, and in how he does it.
Since the president has vast formal and informal powers, any use of his power to achieve a totalisation of his ideology into society is "alarms going off territory".
When he has taken down the law firms, suspended the licences of the media companies, sequestered the National Guard against protestors, and deported political opponents -- at that stage, what will be left to protect you?
The president, as one man, cannot be allowed to wield the full power of government -- that is tyranny. And he especially cannot wield it against civil society -- that is totalitarianism.
>Universities are at a point where getting a job often requires including a DEI statement in your CV[1]. In my opinion, this is not compatible with academic freedom.
>What, exactly, do you think the left has been doing for the last ~10 years?
Which single person is "the Left" here? You're basically proposing establishing a personalist dictatorship to combat a bureaucratic para-party. Your solution to authoritarianism is intensified authoritarianism.
>The universities are basically captured by one party and their ideology and I suspect Trump's hamfisted attempts to counter that will only make a small dent
How were they captured? What evidence do you have?
> Neither the Democrats nor the Republicans are fundamentally interested in "freedom".
The thing with the US Presidency is that in the first term, you have to be at least somewhat motivated to doing things that could get you a second term...
The second term rolls around, and now you can do whatever you like, because you're done at the end of that. At least in theory.
The Nazi / Hitler comparison was always a poor fit, in my view. But what most people were actually saying was just "this person does not care about the laws and values that keep our government from being a tyranny".
And what happened is like Y2K: People who recognized the risks successfully worked to mitigate the worst of them. It's not really surprising, but it is frustrating, that just like with Y2K, many people thus concluded that it was not necessary to mitigate the risks.
For many people, mitigating risks provides evidence that there were never any risks in the first place. (You can probably think of more examples of this.)
But unfortunately, people were correct when they identified that Trump's character, combined with increasing control over one of the two political parties, could pose a threat to our system. And now it's harder to mitigate the problem, because the control over the party has advanced significantly further.
Suppose they are not Nazis nor fascists, but mere authoritarians like Putin. Does that make it any better?
I grew up in Russia at the time when we had a brief stint with democracy. I remember how people elected Putin because he was supposed to fix everything that was wrong, and how they laughed at those of us who said that it would be a dictatorship before long.
I guess for all my dislike of what liberalism has become, I was still too liberal in my thinking. I.e., that the presidency "by construction" is quite a powerless office: everything has to go through Congress, and the courts stop half of what any president wants to do.
If Trump had been in the straitjacket I had expected, I would not mind that "on this go around" the American right, with its grievances, has them heard by American society.
The problem of American politics, over the last decade or two, has been the complete cultural marginalisation of the right (from centres of civil power). Something had to give. The universities, the corporate culture, the internet mass media -- all had been monopolised by a "consensus moralism" which was repulsive to a lot of people.
I didn't feel able to continue to deny those people their representation. However, I hadn't seen how easily the straitjackets of the constitution could be disregarded if you only have enough people at the top willing to do it.
> The problem of American politics, over the last decade or two, has been the complete cultural marginalisation of the right (from centres of civil power). Something had to give. The universities, the corporate culture, the internet mass media -- all had been monopolised by a "consensus moralism" which was repulsive to a lot of people.
I see this offered a lot as an example of a "missing middle", that conservative ideals are systematically underrepresented in e.g. universities or popular culture, and the explanation offered by conservative thinkers is that there's some shadowy force at play.
Could it not just be that these ideals are unpopular? The classic tale of a kid going off to college and coming back with more liberal politics is offered as an example of brainwashing or "consensus moralism," but maybe it's because they were genuinely convinced to shift their worldview.
> The problem of American politics, over the last decade or two, has been the complete cultural marginalisation of the right (from centres of civil power). Something had to give. The universities, the corporate culture, the internet mass media -- all had been monopolised by a "consensus moralism" which was repulsive to a lot of people.
I share the frustration of many commenters that you're just now coming to believe that Trump is a dangerous threat to our entire system. It's bewildering to hear people say some variation of "how could we have known??", when it has all seemed so obvious to many of us for years that this is the road we were going down.
That said, I do deeply appreciate your willingness to change your mind, and to talk about it publicly. The reality is that a third of our society is in Trump's thrall. At my best, I don't want those people to disappear, or suffer in powerlessness for years. I want them to change their mind, and I know how hard that can be. So thank you!
How can you have been neutral on Trump until just now and then wrote that? This both-sides-ism looks a bit ludicrous. Neither side is perfect but one is a propagandistic cult and the other is a reasonable status quo party. One wants to throw hand grenades into every room of the government out of spite and out of desire to enrich and empower the billionaire class. And you’re now having this huge intellectual reckoning? Where were you the last 9 years of Trump?
I for one think better late than never. There's no shame in falling for a movement this big, if you eventually realize it's built on a mountain of lies and decide to take a step away from it.
Trump is very effective at selling people their grievances; at identifying problems, "with the right emotional tone", and so on. Obviously, he's completely unable to solve any of them -- and mostly lacks the interest in doing so.
Since I sympathised with the people who sympathised with him, I did not regard him as inherently "evil" -- which seemed to be the left's take. And it's a pretty dangerous one. Because when people identify with Trump, if you call him evil, you call them evil too. And the left's habit of just opposing whatever he says renders their side seemingly at least as callous as him -- which is why so many people in polls believe Trump understands their problems better than the other side.
I think it's more accurate to say Trump is a complex individual who could, with the right social environment, express quite different politics. What I hadn't anticipated is that his social environment has become so radicalised, professionalised, and totalitarian. (As someone else put it: the last Trump was "Jared's" and this one is Don Jr.'s. Trump, I think, can be both. That's over now.)
In any case, I think it's a moot point. I was wrong. This latent rage of the right against its cultural marginalisation is now a smokescreen for the totalising of the presidency. It's a real problem.
>Trump is very effective at selling people their grievances; at identifying problems
What are some of the problems he identified? Because his speeches just seem to tap into vague insecurities and the general claim that things were better in the past.
> How can you have been neutral on Trump until just now and then wrote that? This both-sides-ism looks a bit ludicrous.
Devil's advocate: I think it's easy if you don't directly feel the impact of his policies. I've been losing my marbles about Trump at family dinners for a while, but for a chunk of my family he's a check against "radical" liberalism (read: gender ideology, spending money on things that don't serve everyday Americans) and a path to lower tax bills.
Similarly, I think it's easy (from a conservative perspective) to dismiss all the seemingly emotional reactions to something Your Guy is saying because that's just politics; that's the expected behavior of politicians. It's not a problem if Your Guy is caught in a lie because they all do it.
I'm straw-manning a bit, but I'm just trying to sketch anecdotes of how I've seen otherwise rational, empathetic, intelligent people routinely offer (to me) unreasonably calm takes on Trump's activities and behavior.
One has to imagine it's vanishingly rare to hear anyone call for "the use of violence against non-combatants to achieve political or ideological aims" -- esp. by students at western universities.
Expressing support for groups who have taken such actions is not calling for terrorism -- almost every state in the world has engaged in terrorism. Plausibly the CIA's (recent) use of torture prisons and kidnapping was in large part about terror for political ends. Yet one can express support for the US, and indeed the CIA, in other actions (e.g., non-terror actions against military targets).
In very many cases today it seems "terrorism" is a political accusation that is used to suppress political expression and as a way around free speech and freedom of assembly laws. It is, I suppose, especially effective when acts of terrorism have recently been committed.
The state, in having the prerogative to decide who counts as a terrorist, and therefore what kinds of speech count as "support for terrorism", thereby basically grants itself universal licence to suppress any kind of speech it dislikes.
In the case of Israel/Hamas: since Gaza has been flattened by military bombardment and has no effective capability to resist or mount any opposition to Israel, speech in support of Israel's enemies is particularly powerless. There's basically nothing anyone can do, let alone as a student. So even to care, at all, what students in universities are saying shows this cannot plausibly be about the actual actions of Hamas.
The most charitable interpretation of why this is happening is that pro-Israeli students and civil society groups (perhaps often left-wing) are engaged in a moral panic about their peers who have turned against Israel, and higher-ups in power are responding to this moral panic. The backfire effects from this on US society will be enormous --- lots of people will be asking, "why is the US state engaged in violent actions against students for the sake of another country which is in an extremely powerful position?". Knowing US citizens, I think this will rub a lot of people the wrong way, in the end.
All articles of this class, whether positive or negative, begin "I was working on a hobby project" or some variation thereof.
The purpose of hobbies is to be a hobby, archetypical tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.
Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements -- evolving their software towards solving them.
Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Powertools do not reduce the essential craft, they increase the "time to craft being required" -- they mean we run into walls of required expertise faster.
AI as applied to non-hobby projects -- R&D programming in the large, where requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.) -- just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.
I have not seen a single experienced software engineer -- i.e., those operating at typical "in the large" programming scales, in typical R&D projects (revisions to legacy, or greenfield where just the requirements are new) -- offer a "sky is falling" take.
I think it also misses the way you can automate non-trivial tasks. For example, I am working on a project where there are tens of thousands of different data sets, each with its own metadata and structure, but the underlying data is mostly the same. Because the metadata and structure are all different, it's really impossible to combine all this data into one big data set without a team of engineers going through each data set and meticulously restructuring and conforming said metadata to a new monolithic schema. However, I don't have any money to hire that team of engineers. But I can massage LLMs into doing that work for me. These are ideal tasks for AI-type algorithms to solve. It makes me quite excited for the future, as many tasks of this kind could be given to AI agents that would otherwise be impossible to do yourself.
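To make the shape of that workflow concrete, here's a minimal sketch of the pattern being described: ask an LLM to propose a mapping from a dataset's idiosyncratic field names onto one monolithic schema, then apply that mapping mechanically. All names (`TARGET_FIELDS`, the field names, the stand-in model reply) are invented for illustration; in practice `llm_reply` would come from whatever model API you use.

```python
import json

# Hypothetical monolithic target schema (field names are assumptions for this sketch).
TARGET_FIELDS = ["station_id", "timestamp", "value"]

def build_mapping_prompt(metadata: dict) -> str:
    """Prompt asking the model to map source fields onto the target schema as JSON."""
    return (
        f"Map these source fields onto the target schema {TARGET_FIELDS} "
        "and reply with a JSON object of source->target pairs only:\n"
        + json.dumps(metadata)
    )

def apply_mapping(record: dict, mapping: dict) -> dict:
    """Rename a record's keys using the proposed source->target mapping."""
    return {target: record[source] for source, target in mapping.items() if source in record}

# In practice you'd send build_mapping_prompt(...) to your LLM of choice;
# here we stand in a plausible model reply to show the flow end to end.
llm_reply = '{"sensor": "station_id", "ts": "timestamp", "reading": "value"}'
mapping = json.loads(llm_reply)

row = {"sensor": "A7", "ts": "2024-01-01T00:00:00", "reading": 3.2}
print(apply_mapping(row, mapping))
# -> {'station_id': 'A7', 'timestamp': '2024-01-01T00:00:00', 'value': 3.2}
```

The LLM only produces the per-dataset mapping, a small, human-reviewable artifact; the actual data transformation stays deterministic.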
I agree, but only for situations where the probabilistic nature is acceptable. It would be the same if you had a large team of humans doing the same work. Inevitably misclassifications would occur on an ongoing basis.
Compare this to the situation where you have a team develop schemas for your datasets which can be tested and verified, and fixed in the event of errors. You can't really "fix" an LLM or human agent in that way.
So I feel like traditionally computing excelled at many tasks that humans couldn't do - computers are crazy fast and don't make mistakes, as a rule. LLMs remove this speed and accuracy, becoming something more like scalable humans (their "intelligence" is debatable, but possibly a moving target - I've yet to see an LLM that I would trust more than a very junior developer). LLMs (and ML generally) will always have higher error margins; it's how they can do what they do.
Yes, but I see it as multiple steps. Perhaps the LLM solution has some probabilistic issues that only get you 80% of the way there - but that has probably already given you some ideas about how to better solve the problem. And in this case the problem is somewhat intractable because of the size and complexity of the way the data is stored. So, as in my example, the first step is LLMs, but the second step is to use what they produce as the structure for building a deterministic pipeline. The problem isn't that there are ten thousand different metadata, but that the structure of those metadata is diffuse. The LLM solution will first help identify the main points of what needs to be conformed to the monolithic schema; then I will build more production-ready, deterministic pipelines. At least that is the plan. I'll write a Substack about it eventually if this plan works, haha.
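The second step described above - freezing the LLM's proposals into something deterministic - might look like this sketch: the model's proposed mappings are saved as a plain lookup table (reviewable, versionable), and a validation gate rejects any mapping that names a field outside the target schema. The dataset IDs and field names are made up for illustration.

```python
# Assumed target schema for the monolithic data set (names are illustrative).
TARGET_FIELDS = {"station_id", "timestamp", "value"}

# LLM-proposed mappings, frozen after human review: dataset_id -> {source: target}.
# Once frozen, the pipeline is fully deterministic; the LLM is out of the loop.
frozen_mappings = {
    "ds-001": {"sensor": "station_id", "ts": "timestamp", "reading": "value"},
    "ds-002": {"site": "station_id", "time": "timestamp", "obs": "value"},
}

def validate(mapping: dict) -> bool:
    """A mapping is usable only if every target field it names is in the schema."""
    return set(mapping.values()) <= TARGET_FIELDS

# Gate: any dataset whose mapping escapes the schema goes back for manual review.
needs_review = [ds for ds, m in frozen_mappings.items() if not validate(m)]
print(needs_review)
# -> []
```

The point of the gate is that LLM output never flows straight into production: it either passes a mechanical check against the schema or gets kicked back to a human.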
I'm reminded of the game Factorio: Essentially the entire game loop is "Do a thing manually, then automate it, then do the higher-level thing the automation enables you to do manually, then automate that, etc etc"
So if you want to translate that, there is value in doing a processing step manually to learn how it works - but when you understood that, automation can actually benefit you, because only then are you even able to do larger, higher-level processing steps "manually", that would take an infeasible amount of time and energy otherwise.
Where I'd agree though is that you should never lose the basic understanding and transparency of the lower-level steps if you can avoid that in any way.
I've used Claude-Code & Roo-Code plenty of times with my hobby projects.
I understand what the article means, but sometimes I've got the broad scope of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle"; sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.
I've always had to fix up the code one way or another, though. And most of the time the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.
About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot, tools like Roo-Code seem very wasteful on that front)
> I have not seen a single take by an experienced software engineer have a "sky is falling" take,
Let me save everybody some time:
1. They're not saying it because they don't want to think of themselves as obsolete.
2. You're not using AI right, programmers who do will take your job.
3. What model/version/prompt did you use? Works For Me.
But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there's no short term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that, the world is a bit mad sometimes, but we deal with it.
My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.
People spending their days solving problems probably generally don't have much time to create science fiction.
The part before "But seriously" was sarcasm. I find it very odd to assume that a professional developer (even if it's not what they would describe as their field) is using it wrong. But it's a pretty standard reply to measured comments about LLMs.
> I find it very odd to assume that a professional developer (even if it's not what they would describe as their field) is using it wrong.
They're encountering a type of tool they haven't met before and haven't been trained to use. The default assumption is they are probably using it wrong. There isn't any reason to assume they're using it right - doing things wrong is the default state of humans.
> O'ahu, at least, is teaching us important lessons that can help protect other environments not yet so degraded
The entire article presumes that novelty is a degradation -- yet offers no evidence for it. So it's just an article of faith that whatever series of major evolutionary catastrophes led to an ecology is morally or aesthetically preferable to one of human design and intention?
This Disneyfication of nature is a great stupidity. Nature is just a series of major crises, punctuated by periods of some novelty -- that this process should be preferable to any other reads as quite implausible to me.
The entire article presumes that novelty is a degradation -- yet offers no evidence for it
Isn't the point rather that the novelty itself isn't the degradation, but the disappearance of the native species as they get replaced by other species which already exist elsewhere, thereby decreasing the overall number of species should the native ones go extinct? You could then argue that fewer species isn't a degradation because on a huge timescale that might not matter. However, on a more 'current' timescale, I'm not sure how else to treat the man-made biodiversity loss other than as a degradation.
While I get where you're coming from and agree to some extent, the cost of introducing new species is often the eradication of native species that can't compete. The moral argument is that those species deserve to be protected because they're valuable as is, while the utilitarian argument is that if native species die, we're losing access to genetic diversity that could be exploited now or in the future for medical treatments, innovations in science, etc.
They're being replaced by ecologies useful to humans, here a variety of crops and the like. I'm unconvinced that the land should exist for some specific species of mice and birds, but not for us.
All I really hear is that some very small group of aesthetically-minded human apes are precious about one more variety of bird, against the interest of very many other human apes that need to eat.
If the new ecology were really extremely desolate, we might weigh this up a little differently, sure. But the article's entire analysis is that these new ecologies are genuinely "natural" in the sense of being self-sustaining, varied, and so on.
No it's not. He gave you modal conditions on "understanding", he said: predicting the syntax of valid programs, and their operational semantics, i.e., the behaviour of the computer as it runs.
I would go much further than this; but this is a de minimis criterion that the LLM already fails.
What zealots eventually discover is that they can hold their "fanatical proposition" fixed in the face of all opposition to the contrary, by tearing down the whole edifice of science, knowledge, and reality itself.
If you wish to assert, against any reasonable thought, that the sky is a pink dome, you can do so -- by claiming first that our eyes are broken, and then, eventually, that we live in some paranoid "philosophical abyss" carefully constructed to permit your paranoia.
This absurdity is exhausting, and I wish one day to find fanatics who'd realise it quickly and abate it -- but alas, I never have.
If you find yourself hollowing out the meaning of words to the point of making no distinctions, denying reality to reality itself, and otherwise arriving at a "philosophical abyss", be aware that it is your cherished propositions which are the madness and nothing else.
Here: no, the LLM does not understand. Yes, we do. It is your job to begin from reasonable premises and abduce reasonable theories. If you do not, you will not.
>No it's not. He gave you modal conditions on "understanding", he said: predicting the syntax of valid programs, and their operational semantics, i.e., the behaviour of the computer as it runs.
LLMs are perfectly capable of predicting the behavior of programs. You don't have to take my word for it, you can test it yourself. So he gave modal conditions they already satisfy. Can I conclude they understand now?
>If you find yourself hollowing out the meaning of words to the point of making no distinctions, denying reality to reality itself, and otherwise arriving at a "philosophical abyss", be aware that it is your cherished propositions which are the madness and nothing else.
The only people denying reality are those who insist that it is not 'real' understanding and yet cannot distinguish this 'real' from 'fake' property in any verifiable manner, the very definition of an invented property.
Your argument boils down to 'I think it's so absurd that it cannot be so'. That's the best you can do? Do you genuinely think that's a remotely convincing argument?
LLMs are reasonably competent at surfacing the behaviour of simple programs when that behaviour is a relatively straightforward structural extension of patterns in their training set that they've managed to correlate together.
It's very clear that LLMs lack understanding when you use them for anything remotely sophisticated. I say this as someone who leverages them extensively on a daily basis - mostly for development. They're very powerful tools and I'm grateful for their existence and the force multiplier they represent.
Try to get one to act as a storyteller and the limitations in understanding glare out. You try to goad some creativity and character out of it and it spits out generally insipid recombinations of obvious tropes.
In programming, I use AI strictly as an auto-complete extension. Even in that limited context, the latest models make trivial mistakes in certain circumstances that reveal their lack of understanding. The ones that stand out are the circumstances where the local change to make is very obvious and simple, but the context of the code is something that the ML hasn't seen before.
In those cases, I see them slapping together code that's semantically wrong in the local context, but pattern matches well against the outer context.
It's very clear that the ML doesn't even have a SIMPLE understanding of the language semantics, despite having been trained on presumably multiple billions of lines of code from all sorts of different programming languages.
If you train a human against half a dozen programming languages, you can readily expect by the end of that training that they will, all by themselves, simply through mastering the individual languages, have constructed their own internal generalized models of programming languages as a whole, and would become aware of some semantic generalities. And if I had asked that human to make that same small completion for me, they would have gotten it right. They would have understood that the language semantics are a stronger implicit context compared to the surrounding syntax.
MLs just don't do that. They're very impressive tools, and they are a strong step forward toward some machine model of understanding (sophisticated pattern matching is likely a fundamental prerequisite for understanding), but ascribing understanding to them at this point is jumping the gun. They're not there yet.
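The failure mode described above can be sketched with a toy example (hypothetical; the names `counts` and `record` are invented for illustration, not taken from any real model output): surrounding code dominated by list-style `.append` calls can coax a pattern-matching completion into calling `.append` on a dict, which Python's semantics forbid, while the locally correct completion is plain dict indexing.

```python
# Hypothetical sketch of a completion that pattern-matches the outer
# context (lots of list.append calls nearby) instead of respecting the
# local semantics (counts is a dict, which has no .append method).

counts = {}  # a dict, even though the surrounding code mostly builds lists

def record(word: str) -> None:
    # A syntax-pattern-matched completion might emit `counts.append(word)`,
    # which would raise AttributeError at runtime.
    # The semantically correct local change is dict indexing:
    counts[word] = counts.get(word, 0) + 1

record("a")
record("a")
record("b")
print(counts)  # {'a': 2, 'b': 1}
```

The point of the sketch is that the wrong completion is syntactically plausible given the neighbouring code, yet invalid under the language's semantics; a reader who has internalised the semantics would never produce it.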
>LLMs are reasonably competent at surfacing the behaviour of simple programs when the behaviour of those programs is a relatively straightforward structural extension of enough of its training set that it's managed to correlate together. It's very clear that LLMs lack understanding when you use them for anything remotely sophisticated.
No, because even those 'sophisticated' examples still get very non-trivial attempts. If I were to use the same standard of understanding we apply to humans, I would rarely class LLMs as having no understanding of some topic. Understanding does not mean perfection or the absence of mistakes, except in fiction and our collective imaginations.
>Try to get one to act as a storyteller and the limitations in understanding glare out. You try to goad some creativity and character out of it and it spits out generally insipid recombinations of obvious tropes.
I do, and creativity is not really the issue with some of the new SOTA. I mean, I understand what you are saying -- default prose often isn't great, and every single model besides 2.5-pro cannot handle details/story instructions for longform writing without essentially collapsing -- but it's not really creativity that's the problem.
>The ones that stand out are the circumstances where the local change to make is very obvious and simple
Obvious and simple to you maybe, but with auto-complete, the context the model actually has is dubious at best. It's not like Copilot is pasting all the code in 10 files if you have 10 files open. What actually makes it into the context for auto-complete is largely beyond your control, with no way to see what gets cut and what doesn't.
I don't use auto-complete very often. For me, it doesn't compare to pasting in relevant code myself and asking for what I want. We have very different experiences.