> By that logic, we should skip the depleted uranium and head straight to thermonuclear weapons
Yes, actually.
(With a massive caveat being if the opponent does not also have nukes.)
I mean, why do you think the US nuked Japan at the end of WW2? Because it was the most expedient and economic way to kill enough people to break the government's will to fight and make them surrender.
The estimated losses for the invasion of their main islands were 1 million. Would you kill 1 million of your countrymen, some of them your relatives and neighbors, or would you rather kill a couple hundred thousand civilians of the country that attacked you?
Ironically, this time the math works out even if you give each life the same value. If you give enemy lives lower value, how many of them would you be willing to nuke before you'd prefer to send your own people to die?
>I mean, why do you think the US nuked Japan at the end of WW2? Because it was the most expedient and economic way to kill enough people to break the government's will to fight and make them surrender.
Except that's not really true. The atomic bombs dropped on Hiroshima and Nagasaki had little to do with "ending the war more quickly"[0]:
"The Soviet invasion of Manchuria and other Japanese colonies began at midnight on August 8, sandwiched between the bombings of Hiroshima and Nagasaki. And it was, indeed, the death blow U.S. officials knew it would be. When asked, on August 10, why Japan had to surrender so quickly, Prime Minister Suzuki explained, Japan must surrender immediately or "the Soviet Union will take not only Manchuria, Korea, Karafuto, but also Hokkaido. This would destroy the foundation of Japan. We must end the war when we can deal with the United States."
As postwar U.S. intelligence reports made clear, the atomic bombs had little impact on the Japanese decision. The U.S. had been firebombing and wiping out Japanese cities since early March. Destruction reached 99.5 percent in the city of Toyama. Japanese leaders accepted that the U.S. could and would wipe out Japan's cities. It didn't make a big difference whether this was one plane and one bomb or hundreds of planes and thousands of bombs."
I've read this too but it doesn't disprove what the US was thinking at the time.
People think others think like them. With the US being a democratic country that considers the value of a life to be high, I have no trouble believing that the US government thought the Japanese government would consider the cost of continued fighting to be too high.
> The "prompt and utter destruction" clause has been interpreted as a veiled warning about American possession of the atomic bomb[1]
We now largely know that strategic bombing does not work[2], but that doesn't stop some from trying now, and it certainly didn't stop anyone back then.
That's not what US military leaders were saying at the time. I'm not saying that others weren't confused about it, but the US military establishment knew what was up.
You hinted at it, and my initial post originally included the statement that the atomic bombs (and especially the second, the Nagasaki bomb) were supposed to serve as a warning to the Soviets, not as any attempt to limit casualties or shorten the war. However, I removed it because I couldn't find any direct quotes about it.
Then again, that's not something the US government would want publicized at that time, given that the USSR was their putative ally at that moment. As such, I'm not surprised that my cursory search didn't find any such quote from that period.
From the article I linked in my previous post[0]:
>General Dwight Eisenhower voiced his opposition at Potsdam. "The Japanese were already defeated," he told Secretary of War Henry Stimson, "and it wasn't necessary to hit them with that awful thing." Admiral William Leahy, President Harry Truman's chief of staff, said that the "Japanese were already defeated and ready to surrender….The use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan." General Douglas MacArthur said that the Japanese would have gladly surrendered as early as May if the U.S. had told them they could keep the emperor. Similar views were voiced by Admirals Chester Nimitz, Ernest King and William Halsey, and General Henry Arnold.
I left out this bit, again from the same link I shared previously[0]:
>U.S. and British intelligence officials, having broken Japanese codes early in the war, were well aware of Japanese desperation and the effect that Soviet intervention would have. On April 11, the Joint Intelligence Staff of the Joint Chiefs predicted, "If at any time the USSR should enter the war, all Japanese will realize that absolute defeat is inevitable." Japan's Supreme War Council confirmed that conclusion, declaring in May, "At the present moment, when Japan is waging a life-or-death struggle against the U.S. and Britain, Soviet entry into the war will deal a death blow to the Empire."
The emperor's surrender speech made direct reference to the atomic bombs.
Following the Hiroshima bombing on August 6, and the Soviet declaration of war and Nagasaki bombing on August 9, the Emperor's speech was broadcast at noon Japan Standard Time on August 15, 1945, and referred to the atomic bombs as a reason for the surrender.
"Furthermore, the enemy has begun to employ a new and cruel bomb, causing immense and indiscriminate destruction, the extent of which is beyond all estimation. Should we continue to fight, not only would it result in the ultimate collapse and obliteration of the Japanese nation, but it would also lead to the total extinction of human civilization."
This is the kind of implicit lying that seems pervasive today and I am so tired of it.
This alone is sufficient evidence of their malicious intent and should be enough to punish the people responsible for trying to ruin an innocent person's life.
But it's not gonna happen because the law is not written to punish people using it maliciously against others and most people simply won't care anyway.
I believe this behaviour is normalized in prosecution. Accusing someone of a crime? Raid their kitchen and bag every knife as a weapon and every household chemical as explosive precursors to get the jury on your side.
Think of organizations as a kind of AI. A prosecutorial organization can take on a so-called "paperclip maximizing" dysfunction just like a standard AI. Converts the whole world to paperclips.
The solution actually is to gate the specialist AIs through a generalist process. That's what court is supposed to be, but court is less effective in the modern world.
I really like this framing. It also reinforces my opinion that the thing most like the proverbial AI that turns the entire world into paperclips is by far humans. It's a bit fascinating if you look at it from a psychological / mythopoeic point of view: are villains _always_ the evil part of ourselves, even when they're not human?
I do believe causing harm without justification should automatically result in punishment that causes the same harm to the abuser multiplied by some constant, but 10x is probably too much. Usually, I'd suggest something between 1.5 and 2.
He was facing 10 years IIRC, giving them 15 seems reasonable.
This constant should increase with repeated abuse so people who are habitual offenders get effectively removed from society.
Some countries already have something similar, like the 3 strikes law, but that has issues with discontinuity (the 3rd offense is sometimes punished too severely if minor). I'd prefer a continuous system, ideally one that is based on actual harm.
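To make the continuous idea concrete, here is a minimal sketch (in Rust, with completely made-up constants) of a multiplier that grows smoothly with prior offenses instead of jumping at a fixed third strike:

    // Hypothetical sentencing multiplier: starts between 1.5x and 2x and
    // grows smoothly with prior offenses - no "third strike" discontinuity.
    fn multiplier(prior_offenses: u32) -> f64 {
        1.5 * 1.2f64.powi(prior_offenses as i32)
    }

    fn punishment(harm: f64, prior_offenses: u32) -> f64 {
        harm * multiplier(prior_offenses)
    }

    fn main() {
        println!("{:.1}", punishment(10.0, 0)); // first offense: 15.0
        println!("{:.1}", punishment(10.0, 3)); // habitual: 25.9
    }

The point is only the shape: minor repeat offenses get slightly harsher, habitual abuse grows without bound, and nothing discontinuous happens at offense number 3.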
---
We also need mechanisms where civil servants (or anybody else, really) can challenge any law on the basis of being stupid. If the law is written so that it prohibits any amount (or an amount so small that it is harmless, even if he imported dozens of these samples), it is stupid and should be removed.
If a psycho runs to stab someone, but a car's lights flash in his face just as the knife is about to hit his victim, causing him to miss and hit only the arm, why should he get a discount?
> This is the kind of implicit lying that seems pervasive today and I am so tired of it.
I am so tired of it, too. Toying with the legal boundary of lying in communication is pathological, maybe even sociopathic.
Everyone knows when someone is doing it, too. We just don’t have the means to punish it, even in the courts.
The whole “I won’t get punished so I’m doing all the immoral things” habit is foul to begin with. I don’t know how, but I hope our society can get over it. As things stand, there is no way to outlaw being an asshole.
There are glimmers of hope - like Wales trying to ban lying in politics. But of course, the punishment has to be proportional to the offense, not just a slap on the wrist.
If I wanted to take things to an extreme, I'd ask why laws even need to be so specific about which offenses lead to which punishments and which offenses are even punishable in the first place (the "what is not forbidden is allowed" principle).
In theory, you could cover them more generally by saying that any time someone intentionally causes harm to others (without a valid reason), he will be caused proportional harm in return. Then all you need is a conversion table to prison time, fines, etc.
With lying, all you would need to prove is that the person lied intentionally and quantify the expected harm which would have been caused if the lie was successful (regardless of whether it actually was - intent is what matters).
As a bonus, it would force everyone to acknowledge the full amount of harm caused. For example, rape usually leads to lifelong consequences for the victim but not the attacker. In this system, such inconsistency, some would call it injustice, would be obvious and it would be much easier for anyone to call for rectification.
"without a valid reason" is doing a lot of heavy lifting here. Not only would this idea be impractical and highly subjective, determining what a valid reason is, is the same problem as defining the Law in the first place.
Can you insult someone? Can you say something wrong that you thought was right ("the lion cage is locked") that someone is injured from? What are their duties in checking that the info they get is correct? Is there a min wage or not? What value is it? Does it change by city or state? Can under-age people sign contracts? Can they vote?
I never said we didn't need rules, just that when they are too specific, people tend to follow the letter but break the spirit of the rule.
(Sidenote, one deeply ingrained idea is that the law is somehow special compared to other rules. The only real difference is that the law is enforced by violence while other rules are not.)
I was also talking about criminal law so the questions about minimum wage, contracts and voting are irrelevant regardless if you want specific or general rules about punishments.
You don't have to lie to tell a lie. The media have honed this skill well over decades.
"Coffee study found that it TRIPLES your chance of developing a terrifying form of colon cancer! A 300% increase!"
In reality the study had a sample size of 10 and the odds were for an extremely rare form of lung cancer that you have a 0.0003% chance of developing anyway. But now most readers go tell their co-workers "they did a study and found that coffee actually gives you colon cancer".
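For anyone who wants the trick spelled out, here is the arithmetic (using the 0.0003% figure from above; note that tripling is technically a 200% increase, which headlines routinely inflate to "300%"):

    fn main() {
        let baseline = 0.000_003_f64; // 0.0003% lifetime risk, as a fraction
        let tripled = baseline * 3.0; // the "TRIPLES" from the headline

        // Relative increase: sounds terrifying.
        println!("relative: +{:.0}%", (tripled / baseline - 1.0) * 100.0); // +200%

        // Absolute increase: 0.0003% -> 0.0009%.
        println!("absolute: +{:.4} percentage points", (tripled - baseline) * 100.0); // +0.0006
    }

Same data, two framings; only one of them sells papers.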
What I've noticed is that for a lot of people, if you do something wrong through a sufficient number of steps, they feel like the severity is lower.
The opposite is in fact true - causing harm through multiple steps shows intent and the severity is in fact higher.
If a journalist doesn't understand statistical significance, he is either incompetent or malicious. Either way he needs to be removed from his little position of power and if the incompetence is sufficient or the malice proven, he needs to be punished.
Apathetic voters who'll still vote for a terrible party just because they hate the same people the politicians say they do?
MIB put it so succinctly: large groups of humans are exceedingly dumb. It's almost like our individual intelligence drops; perhaps we evolved those effects from tribalism so that organising larger groups was more effective. And perhaps that effect is broken now that we organise in much larger groups than we ever evolved for.
"A bad plan now is better than a good plan later."
People have evolved to unify behind a strong (and aggressive) leader because historically the biggest threat to one's tribe (and therefore genes) was other tribes. You might not be in the right but it doesn't matter to evolution; what matters is that you kill the people trying to kill you, regardless of who started it.
This primitive drive is why every time the going gets tough, people elect charismatic and abusive leaders - because their lizard brain wants to fight an external enemy and abusers are good at giving people that enemy (Jews for Hitler, immigrants and gays and anybody who is slightly different for Trump, ...).
---
The issue is that for most of our evolution, such a leader could unite hundreds, maybe thousands of people, and if a tribe behaved aggressively and unjustly towards its neighbors, those neighbors would unite against it and "keep it in check" (which is a euphemism for fighting and killing them).
But these days you have 3 superpowers, 2 of which are dictatorships and the 3rd is on track to become one. There is nobody to keep them in check.
> There are glimmers of hope - like Wales trying to ban lying in politics
Lol. Give me a break. This is like all the "combat disinformation" bullshit. You claim something is a lie or disinformation because your government-appointed expert said so and jail someone. When years later it's undeniable that you were the one lying, you say "we did the best with what we had at the time".
Naive solutions only give more power to those in power and are abused routinely.
Obviously all available tools will be used by bad people. What we need is:
1) Good people to also use those tools - a lot of self-proclaimed good people think some tools are bad and therefore won't use them. But tools are just tools; what makes them good or bad is who you use them against and for what reason.
A simple example is killing. Many people will have a knee-jerk reaction and say it's always bad. And then you start asking them questions and they begrudgingly admit that it's OK in self defense. And then you ask more questions and you come up with a bunch of examples where logically it's the right tool to use, but it's outside the Overton window for them to admit it.
A good way to reveal people's true morality is movies. People will cheer for the good guys taking revenge, killing a rapist, overthrowing a corrupt government, etc. Because they naturally understand those things to be right, they've just been conditioned to not say it.
2) When bad people hurt someone using a tool, we need the tool to backfire when caught.
Obviously, to jail someone, the lying needs to be proven "beyond reasonable doubt" - i.e. Blackstone's ratio. Oh, and no government-appointed experts who get to dictate the truth. If the truth is not known with sufficient certainty, then neither side can be punished.
This threshold should be strict enough that if it later turns out the person was not in fact lying, the trial gets reevaluated, and the reevaluation will show that the prosecution manipulated evidence to make it look sufficient to the judge.
Alternatively, since incentives dictate how people play the game, we can decide that 10:1 is an acceptable error ratio and automatically punish prosecutors whose error rate exceeds it, jailing them for the excess time.
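A sketch of that incentive rule with made-up numbers (the 10:1 budget and the liability formula are just my proposal above, nothing that exists anywhere):

    // Hypothetical: a prosecutor is liable for wrongful convictions beyond
    // an accepted 10:1 error ratio, in proportion to the excess jail time.
    fn excess_liability_years(convictions: u32, wrongful: u32, avg_years_each: f64) -> f64 {
        let tolerated = convictions as f64 / 10.0; // the 10:1 error budget
        let excess = (wrongful as f64 - tolerated).max(0.0);
        excess * avg_years_each
    }

    fn main() {
        // 100 convictions, 13 of them wrongful, averaging 4 years each:
        // 3 convictions over budget -> 12 years of liability.
        println!("{}", excess_liability_years(100, 13, 4.0));
    }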
So yes, if A jails B and it later turns out this was done through either sufficient incompetence or malice, then A should face the same punishment.
---
I am sure given more time, we can come up with less "naive" and more reliable systems. What we know for sure is that the current system is not working - polarization is rising, anti-social disorders are more common, inequality is rising, censorship in the west increased massively in the last few years, etc.
So either we come up with ways to reverse the trend or it will keep getting worse until it reaches some threshold above which society will rapidly and violently change (either more countries fall into authoritarianism or civil war erupts, neither of which is desirable).
Just because something doesn't work doesn't mean anything you propose will be better. That's how we get security theater or worse, the war on drugs.
> So either we come up with ways to reverse the trend or it will keep getting worse until it reaches some threshold above which society will rapidly and violently change (either more countries fall into authoritarianism or civil war erupts, neither of which is desirable
Bullshit. That's your thesis. But hey, if you want to start that violent revolution to overthrow the government do post about it here. I'm sure you'll be successful in this day and age.
You first act as if the current situation is the best we can do, pretending no alternative can be better by implying that any alternative is naive.
I attempt to be reasonable and explain in good faith.
Yet then you admit the current situation doesn't work while continuing to act as if a solution is impossible, pretending any attempt at one is worse without giving any specific criticism.
On top, you:
1) (Probably intentionally) misrepresent what I said - I never said I wanted a violent revolution, I warn about it.
2) Mock me.
EDIT: Oh, and I just noticed you attacked another commenter for absolutely no reason[0]. I would very much like to understand your goals, because going just by your behavior here, they seem diametrically opposed to a better society for no valid reason.
I wish people would seriously consider the (A)GPL for their projects more often. It hasn't happened here, though it has certainly happened in the past without anyone knowing - (A)GPL would make it hard for them to make a closed-source "fork".
In fact, I wish an even stronger license existed, one which allowed the original author to dictate who can build on top of the project, to avoid exactly these kinds of situations where a powerful actor completely disempowers the authors while technically following the license (I assume MS will "fix" their error by correcting the licensing information but will continue to compete with Spegel with the intent to make it irrelevant).
What people who want such things are really after is the leverage to dictate a form of morality - if you don't have money, you are allowed to use the project for free and give back advertising/clout. But if you have money, or could get a lot of money for said project, then they want their payday.
Have you seen the license of llama models from Meta?
> 2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta...
> Interesting that "psychotic disorders" can be replaced with "tribalism" and still be true
That's because both are defects of rational reasoning.
I view it this way: the only difference is that "disorders" are generally harmful to both the individual and society, while "tribalism" is often beneficial to the individual (making allies, strength in unity, status as you said, ...) even if it's harmful to society as a whole.
When _writing code_, you achieve a certain level of understanding because you fundamentally need to make all the decisions yourself (and some of them are informed by factual statements).
When _reading code_, a lot of those decision points are easy to miss or take for granted which means you don't even notice there are alternatives. Furthermore, you don't look up the factual statements, therefore you have a lower level of understanding. Also you have no opportunity to review if those statements are actually true so decisions made on false assumptions get into the codebase.
Finally, reviewing code (to the same level of depth) is significantly more mentally taxing.
To be honest, the whole premise that AI code needs to be banned sounds like a bit of a histrionic caricature to me, so I might not be in the right mindset to accept this either, but this does feel a bit histrionic too. Like, maybe in a vacuum, in a codebase, language, and functionality I'm not familiar with, or if I were too inexperienced to be diligent, or didn't bother with tests... Maybe I'm just old, and those things seem like necessary preconditions even though I'd have merrily ignored them 12 years ago, and working on my own now predisposes me to being happy with the work.
"Easily"? No. Just check the article, it's plenty of examples of skilled programmers stopping to use A"I" because long term they discover more bugs than they would have written themselves. And you can even write the code yourself and have an LLM check it because it'll find tons of false positives.
LLMs are fundamentally probabilistic text generators. We need and expect tools that have an error rate as low as the hardware error rate.
Whenever I open HN, there's a pro-A"I" post which makes me almost question if intellectual labor will be useful in a few years.
Yet when I try to use these tools, they fall short. They can probably be useful for quickly generating half-broken prototypes, where speed is more important than quality. But for real code, especially long term, they just never seem to break even.
This post somewhat restored my confidence in the world's sanity. There are still people who care about quality and who are willing to call bullshit bullshit.
EDIT: The most important part is that he links to tons of examples from other people. There's clearly a sizeable group of people who are not being heard often enough.
> Whenever I open HN, there's a pro-A"I" post which makes me almost question if intellectual labor will be useful in a few years.
HN is infested with grifters, sorry, entrepreneurs, who think that A"I" is their ticket to quick and easy riches
part of the confidence trick is keeping up the charade that A"I" is actually intelligent and not just a sophisticated bullshit generator
and if what you saw when you honestly tried using it last week was bad, then that's no longer state of the art, the latest model will fix everything, see, this latest benchmark they-definitely-didnt-cheat-on says so
I don't even see why "hackers" and "founders" should be related in any way. The hacker attitude (or approach to life) is that of curiosity and the need to build and create. The founder attitude AFAICT is how to get competent builders together and make money (on the positive end of the spectrum) or how to extract value from people instead of creating it (on the negative end of the spectrum).
I googled it and got "Hackers/Founders is a community of tech founders." I think that is sufficiently telling of what it's really about. It's another scam to "get those socially inept nerds to work for us and make us money without even realizing they are getting less than they deserve".
For me the biggest indicator of how much bullshit A"I" is, is how the A"I" tools themselves are bad. Visual Studio still doesn't have an "accept one suggested word" shortcut.
If your LLM is so great, why don't you make it add that feature, Microsoft?
This has been on my mind ever since I realized 2 things:
- the difference between zero-sum and positive-sum games
- that large parts of society are engaged in zero- (or even negative-) sum games, 1) some through choice, or 2) because they are forced to in order to compete with group 1)
Advertising, manipulation, misinformation, disinformation, and lying are all related phenomena with negative effects on both individuals and society.
Just like Wales is the first place to propose punishing lying, at least from some positions of power, I am looking forward to whichever country becomes the first to make advertising illegal, at least in some forms and at some scales.
Whenever I read about LLMs or try to use them, I feel like I am asleep in a dream where two contradicting things can be true at the same time.
On one hand, you have people claiming "AI" can now do SWE tasks which take humans 30 minutes or 2 hours and the time doubles every X months so by Y year, SW development will be completely automated.
On the other hand, you have people saying exactly what you are saying. Usually that LLMs have issues even with small tasks and that repeated/prolonged use generates tech debt even if they succeed on the small tasks.
These two views clearly can't both be true at the same time. My experience is the second category, so I'd like to chalk the first up to marketing hype, but it's confusing how many people who have seemingly nothing to gain from the hype contribute to it.
I'm not sure why this is confusing? We're seeing the phenomenon everywhere in culture lately. People WANT something to be true and try to speak it into existence. They also tend to be the people LEAST qualified to speak about the thing they are referencing. It's not marketing hype, it is propaganda.
Meanwhile, the 'experts' are saying something entirely different and being told they're wrong or worse, lying.
I'm sure you've seen it before, but this propaganda, in particular, is the holy grail of 'business people'. The ones who "have a great idea, just need you to do all the work" types. This has been going on since the late 70s, early 80s.
Not necessarily confusing but very frustrating. This is probably the first time I encountered such a wide range of opinions and therefore such a wide range of uncertainty in a topic close to me.
When a bunch of people very loudly and confidently say your profession, and something you're very good at, will become irrelevant in the next few years, it makes you pay attention. And when you then can't see what they claim to be seeing, then it makes you question whether something is wrong with you or them.
Totally get that; I'm on the older side, so personally I've been down this road quite a few times. We're ALWAYS on the verge of our profession being rugged somehow. RAD tools, Outsourcing, In-sourcing, No-Code, AI/LLM... I used to be curious about why there was overwhelming pressure to eliminate "us", but gave up and just focus on doing good work.
The pressure is simple - money. Competent people are rare and we're not cheap. But it turns out, those cheaper less competent people can't replace us, no matter what tools you give them - there is fundamental complexity to the work we do which they can't handle.
However, I think this time is qualitatively different. This time the rich people who wanna get rid of us are not trying to replace us with other people. This time, they are trying to simulate _us_ using machines. To make "us" faster, cheaper and scalable.
I don't think LLMs will lead to actual AI and their benefit is debatable. But so much money is going into the research that somebody might just manage to build actual AI and then what?
Hopefully, in 10 years we'll all be laughing at how a bunch of billionaires went bankrupt by trying to convince the world that autocomplete was AI. But if not, a whole bunch of people will be competing for a much smaller pool of jobs, making us all much, much poorer, while they will capture all the value that would have normally been produced by us right into their pockets.
> people claiming "AI" can now do SWE tasks which take humans 30 minutes or 2 hours
Yes, people claim that, but everyone with a grain of sense knows this is not true. Yes, in some cases an LLM can write a demo-like Python or web application from scratch, and that looks impressive, but it is still far from really replacing a SWE. The real world is messy and requires care. It requires planning, making some modifications, getting feedback, proceeding or going back to the previous step, thinking about it again. Even when a change works, you still need to go back to the previous step, double check, make improvements, remove stuff, fix errors, treat corner cases.
The LLM doesn't do this; it tries to do everything in one single step. Yes, even in "thinking" mode it thinks ahead and explores a few possibilities, but it doesn't do several iterations as would be needed in many cases. It does a first write like a brilliant programmer might do in one attempt, but it doesn't review its work. The idea of feeding the error back to the LLM so that it will fix it works in simple cases, but in most common cases, where things are more complex, it leads to catastrophes.
Also, when dealing with legacy code it is much more difficult for an LLM, because it has to cope with the existing code and all its idiosyncrasies. That requires a deep understanding of what the code is doing and some well-thought-out planning to modify it without breaking everything, and the LLM is usually bad at that.
In short, LLMs are a wonderful technology, but they are not yet the silver bullet some pretend them to be. Use one as an assistant to help you on specific tasks where the scope is small and the requirements well-defined; that is the ___domain where it excels and is actually useful. You can also use it to get a good starting point in a ___domain you are not familiar with, or for help when you are stuck on some problem. Attempts to give the LLM a task too big or complex are doomed to failure, and you will be frustrated and waste your time.
At first I thought you were going to talk about how various LLMs will gaslight you and say something is true, then only change their mind once you provide a counterexample, and when challenged with it respond "I obviously meant it's mostly true; in that specific case it's false".
The number of scripting languages _for Rust_ is a symptom of how Rust fails to satisfy the need to write code with less strict requirements.
It makes perfect sense to use Rust as the main language for your application but have areas which are either in the prototype stage, need to be written quicker, or simply don't need the performance. But Rust does not offer a way to enter such a less strict context, and such proposals keep getting shot down by the community, even when they are made by core members of the Rust team.
Contrast that with C#, which has a dynamic keyword, allows enabling a checked context (not just in code where you can't miss it but also from csproj), has reflection, etc.
I really want Rust to succeed but sometimes the attitude borders on zealotry.
While I would like a Rust-like language that has those things, complaining that Rust, a compiled native language with no runtime whose closest competitor is C++, does not is a little strange to me.
Yes, it is very multi-paradigm and can be used in many domains, but it's not trying to be C# and it can't be C#. I would love to see Rust# as a language, but Rust itself cannot be that language.
Should new languages artificially restrict themselves based on the restrictions of their main competitor even if it's possible to serve a wider range of usecases?
Dynamic can be implemented in a compile-to-native lang. Contexts with different rules as well. Reflection support would likely have overhead which would need a granular opt in mechanism but is very likely possible.
Similarly many features rust is missing like compile time reflection, field in traits, ...
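To make the "dynamic" point concrete: here is roughly what dynamic-ish typing already looks like in today's Rust via std::any::Any. A language-level opt-in context would presumably just be sugar over this kind of machinery (a sketch of the idea, not a proposal):

    use std::any::Any;

    // Poor man's `dynamic`: values carry their type at runtime and
    // callers downcast when they need a concrete type back.
    fn describe(value: &dyn Any) {
        if let Some(n) = value.downcast_ref::<i64>() {
            println!("integer: {n}");
        } else if let Some(s) = value.downcast_ref::<String>() {
            println!("string: {s}");
        } else {
            println!("something else");
        }
    }

    fn main() {
        let values: Vec<Box<dyn Any>> = vec![Box::new(42i64), Box::new("hi".to_string())];
        for v in &values {
            describe(v.as_ref());
        }
    }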
> Should new languages artificially restrict themselves based on the restrictions of their main competitor even if it's possible to serve a wider range of usecases?
Nope - and indeed, we're seeing Rust used in a more diverse set of applications than C++ (e.g. there are several Rust web frontend frameworks, it's a popular WASM language in general, etc)
However, Rust is targeting the same kind of general constraints as C++ for development and deployment, which means it can't add anything that would depend on a runtime or impose an undue burden on users. (Of course, C++ cheats in this regard - RTTI and exceptions - but Rust matches that, and doesn't go beyond.)
> Dynamic can be implemented in a compile-to-native lang.
Requires runtime functionality, and it's not really clear what the resulting semantics would be, anyway: aside from the usual type-safety concerns, how do you deal with lifetimes and other compile-time constraints?
> Contexts with different rules as well.
What kind of different rules? The problem is that any deviation from the rules needs to be reconciled at some point, and that reconciliation has to be watertight: you can't weaken the guarantees somewhere that interacts with safe Rust code, because the weakness you've introduced can spread. This is already a pretty significant issue with unsafe Rust.
Similarly, moving to a higher level of abstraction has similar issues: how do you reconcile the interactions of GC'd objects with the rest of Rust, which expects deterministic destruction and somewhat-predictable object lifetimes?
> Reflection support would likely have overhead which would need a granular opt in mechanism but is very likely possible.
> Similarly many features rust is missing like compile time reflection, field in traits, ...
I'll give you compile-time reflection; that would have been quite nice to have, but the Rust Foundation alienated the primary person with a plan (https://thephd.dev/i-am-no-longer-speaking-at-rustconf-2023), so who knows when we'll see the next proposal? I agree that it's a shame, but there's usually ways to work around it (proc macros can be used to patch over a lot of Rust's relative deficiencies)
In general, the borrow checker is the primary impediment to copying features from other languages; it's just generally non-trivial to fit them into the Rust paradigm without significant R&D. That's why I think a higher-level Rust would have to be a separate language, not an extension of Rust proper: resolving the collision of semantics between abstraction levels is just too difficult in the general case.
> I personally haven't found Rust that difficult to prototype in
Rust is actually a pretty nice language for prototyping IMO. I agree with your take - Rust has many escape hatches you can use to develop quickly. Then when it comes time to clean up, it's obvious where the deficiencies are (look for all the clones and unwraps, etc)
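A tiny illustration of those escape hatches (nothing project-specific, just the pattern being described):

    // Prototype-mode Rust: clone to sidestep the borrow checker, unwrap to
    // defer error handling. Both are trivially greppable at cleanup time.
    fn load_config(path: &str) -> String {
        std::fs::read_to_string(path).unwrap() // panics if missing; fine for a spike
    }

    fn main() {
        let raw = load_config("Cargo.toml");
        let copy = raw.clone(); // cheap to write now, easy to find and remove later
        println!("{} bytes", copy.len());
    }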
> I see the main benefit of these scripting languages as being able to write/run code at runtime
Thank you. Some commenters seem to think a scripting language somehow reveals some deficiency in the core language. Reality: not all code is available at compile time. Many applications need some way to inject code without recompiling.
Niko Matsakis made a proposal on his blog for opt-in contexts with relaxed rules (such as implicit conversions) and it was hated, at least on reddit. I doubt he would delete it but I can't find it now.
Maybe I'm a zealot, but I use Rust-script with cmd_lib. Rust-script lets you define the libraries at the top of the file (instead of in a Cargo file) and cmd_lib gives you macros to call commands directly almost like in Bash. Then you can iterate over the output way faster than in Bash. It recompiles after any changes, and subsequent runs just call the compiled executable. The downside is that the cache does grow, but it's not that noticeable.
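For anyone curious, the pattern looks roughly like this (assuming rust-script's embedded-manifest convention and cmd_lib's run_cmd!/run_fun! macros; check both crates' docs for the current syntax):

    #!/usr/bin/env rust-script
    //! Dependencies live in the script header instead of a separate Cargo.toml:
    //! ```cargo
    //! [dependencies]
    //! cmd_lib = "1"
    //! ```
    use cmd_lib::*;

    fn main() -> CmdResult {
        // run_fun! captures stdout as a String; run_cmd! just runs the command.
        let files = run_fun!(ls)?;
        for f in files.lines() {
            run_cmd!(echo "found: $f")?;
        }
        Ok(())
    }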
I really like the idea of rust-script, but last time I looked there didn't seem to be a good way to get rust-analyzer to work when writing a script. Maybe I'm a little too reliant on LSPs, but I find writing Rust painful without it. Has the situation improved at all since?
What are you rambling about? This is a sandboxed scripting language to allow your users to define customization at runtime. This has nothing to do with Rust; it could be written in C or C++. You would not run random user-provided C# in your application at runtime.
It is like saying browsers should be coded in C# because C++ can't be used instead of JavaScript...
Since it's written in Rust, that's the easiest place to use the embedding API. https://koto.dev/docs/0.15/api/ I imagine one _could_ use it from C, but it wouldn't be as ergonomic as Lua's C API. And Lua in turn isn't a perfect match for embedding in Rust.
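From a skim of those docs, the high-level entry point looks roughly like this (hedged: I haven't used Koto, and the API names may differ between versions, so treat this as a sketch of the shape rather than exact code):

    use koto::prelude::*;

    fn main() {
        let mut koto = Koto::default();
        // Compile and run a snippet of user-provided script.
        match koto.compile_and_run("1 + 2") {
            Ok(result) => println!("result: {result:?}"),
            Err(err) => eprintln!("script error: {err}"),
        }
    }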
I think it's just a convenient and popular language to experiment with.
For production systems, people just use Python + Rust when needing a balance between dynamism and strictness. The tooling to mix them is very mature and the communities are overlapping.
With uv becoming the de facto packaging solution for Python, I expect the border to blur even more in the future.
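The Rust side of that mix is typically a PyO3 extension module built with maturin. A minimal sketch (the module and function names here are made up, and the exact #[pymodule] signature varies across PyO3 versions):

    use pyo3::prelude::*;

    // A Rust function exposed to Python.
    #[pyfunction]
    fn byte_sum(data: &str) -> u64 {
        data.bytes().map(u64::from).sum()
    }

    // Build with `maturin develop`, then `from fastbits import byte_sum` in Python.
    #[pymodule]
    fn fastbits(m: &Bound<'_, PyModule>) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(byte_sum, m)?)?;
        Ok(())
    }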
This is already available as an optional Lua target in mlua [0]. I recently built a programmable server for Server-Sent Events scriptable with Lua [1]. I chose Lua 5.4, but it's trivial to switch it to LuaJIT, or really any other Lua derivative including Roblox Luau. It's just a matter of enabling the mlua feature you want.
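For reference, embedding with mlua is roughly this compact (the Lua version is chosen via a cargo feature such as lua54 or luajit, per the mlua README):

    use mlua::{Lua, Result};

    fn main() -> Result<()> {
        let lua = Lua::new();

        // Expose a Rust value to scripts, run some Lua, and read a result back.
        lua.globals().set("retries", 3)?;
        lua.load(r#"print("retries = " .. retries)"#).exec()?;
        let doubled: i64 = lua.load("retries * 2").eval()?;
        println!("doubled = {doubled}");
        Ok(())
    }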