Obviously yes. It's acceptable for the same reason that myvec[i] panics (it panics if i is out of bounds, but you already established that i is in bounds) and that a / b panics for integers a and b (it panics if b is zero, but if your code isn't buggy you already checked that b isn't zero before dividing, right?)
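A minimal sketch of that distinction (the function and variable names here are made up for illustration): the panicking forms are for indices and divisors you have already validated, and the checked forms return Option when you have not:

fn demo(myvec: &[i32], i: usize, a: i32, b: i32) {
    if i < myvec.len() {
        let _x = myvec[i]; // bound already checked, so a panic here would be a bug
    }
    let _maybe = myvec.get(i); // Option<&i32>, for when you haven't checked

    if b != 0 {
        let _q = a / b; // divisor already checked, so a panic here would be a bug
    }
    let _maybe_q = a.checked_div(b); // Option<i32>: None if b == 0 (or on overflow)
}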
Panic is absolutely fine for bugs, and it's indeed what should happen when code is buggy. That's because buggy code can make absolutely no guarantees about whether it is okay to continue (arbitrary data structures may be corrupted, for instance).
Indeed, it's hard to "handle an error" when the error means the code is buggy, because you can rarely do anything meaningful about it.
This is of course a problem for code that can't be interrupted... which includes the Linux kernel (it notes the bug, but continues anyway) and embedded systems.
Note that if panic=unwind you have the opportunity to catch the panic. This is usually done by systems that process multiple unrelated requests in the same program: in that case it's okay if only one such request is aborted (in HTTP, it would return a 5xx error), provided you manually verify that no data structure shared between requests can get corrupted. If you do one thread per request, Rust does this automatically; if you have a smaller threadpool with an async runtime, then the runtime needs to catch panics for this to work.
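As a rough sketch of that pattern (the request handler and its types are hypothetical, not from any particular framework), std::panic::catch_unwind turns a panicking request into an error result while the process keeps serving other requests:

use std::panic;

// Hypothetical per-request handler: a panic inside the closure is caught and
// mapped to an error (which a server would report as an HTTP 5xx) instead of
// taking down the whole process.
fn run_request(input: &str) -> Result<String, String> {
    panic::catch_unwind(|| {
        // ... request handling that may contain bugs ...
        format!("handled: {input}")
    })
    .map_err(|_| "internal error (would map to an HTTP 5xx)".to_string())
}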
> Note that if panic=unwind you have the opportunity to catch the panic.
And now your language has exceptions - which break control flow, make reasoning about a program very difficult, and are hard for a compiler to optimize.
That's also a peeve of mine. Is there a way at all for grub to use hardware acceleration there? Or maybe the bootloader isn't allowed to do such things?
Yes - use a newer libcrypto. They are in the process of switching, but it's just taking very long. I don't see why a bootloader wouldn't be allowed to use the CPU features that accelerate decryption.
Nice! Do you have a link with the progress of this? Maybe in a mailing list or something. I can't manage to find it
Also, do you know whether grub plans to support luks2?
And maybe even veracrypt - ok, this one is unlikely. (cryptsetup can read veracrypt just fine and the Linux kernel copes with it, so maybe it's a matter of porting this code to grub? One issue is that grub would need to embed the number of iterations of the key derivation function somehow - the thing veracrypt calls the PIM - because unlike luks, veracrypt doesn't store it in a header that can be read before decrypting.)
You know that the Trump administration is paying millions of dollars to imprison a few hundred people there without due process, right? And it is looking into expanding this right now:
That's only for images coming directly from a camera. If the image was generated in another way, the idea that a pixel is a little square is sometimes fine (for example, pixel art).
#[kani::proof_for_contract(NonNull::new_unchecked)]
pub fn non_null_check_new_unchecked() {
    // Symbolic input: Kani considers every possible usize value here.
    let raw_ptr = kani::any::<usize>() as *mut i32;
    unsafe {
        let _ = NonNull::new_unchecked(raw_ptr);
    }
}
It looks like a test, but it is actually checking that every possible usize, when converted to a *mut i32 and passed to NonNull::new_unchecked, upholds the contract of NonNull::new_unchecked, which is defined here
Which means: if the caller guarantees that the parameter ptr is not null, then result.as_ptr() is the same as the passed ptr
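As a hedged sketch of what such a contract looks like (a standalone analogue, not the verbatim annotation from the verified std sources, and it assumes Kani's #[kani::requires]/#[kani::ensures] contract attributes):

use std::ptr::NonNull;

// Hypothetical standalone wrapper; the real contract lives on
// NonNull::new_unchecked itself in the verified std sources.
#[kani::requires(!ptr.is_null())]                 // caller promise: ptr is non-null
#[kani::ensures(|result| result.as_ptr() == ptr)] // guarantee: the pointer is preserved
unsafe fn new_unchecked_like(ptr: *mut i32) -> NonNull<i32> {
    unsafe { NonNull::new_unchecked(ptr) }
}

#[kani::proof_for_contract(new_unchecked_like)]
fn check_new_unchecked_like() {
    // Kani assumes the precondition for this arbitrary input and checks the postcondition.
    let raw = kani::any::<usize>() as *mut i32;
    unsafe {
        let _ = new_unchecked_like(raw);
    }
}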
That's a kind of trivial contract, but Kani tests all possible pointers (rather than a few cherry-picked ones like the null pointer and a couple of others), without actually brute-forcing them: it recognizes when many inputs exercise the same thing, while still catching a bug if the code changes to handle some input differently. And this approach scales to non-trivial properties too; a lot of things in the stdlib have non-trivial invariants.
It's not that different from writing a regular test; it's just more powerful. And you can even use this #[requires] and #[ensures] syntax to check properties in regular tests if you use the https://crates.io/crates/contracts crate.
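For instance, a hedged sketch with the contracts crate (the function is made up; this assumes the crate's #[requires]/#[ensures] attributes, where ret names the return value) checks the same kind of property at runtime in an ordinary test:

use contracts::{ensures, requires};

// Contracts are checked at runtime: violating one panics, which an ordinary
// #[test] will report as a failure.
#[requires(divisor != 0)]
#[ensures(ret * divisor <= dividend)]
fn quotient(dividend: u32, divisor: u32) -> u32 {
    dividend / divisor
}

#[test]
fn quotient_contract_holds_for_a_sample() {
    assert_eq!(quotient(7, 2), 3);
}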
Really, if you have ever used https://proptest-rs.github.io/proptest/intro.html or the https://crates.io/crates/quickcheck crate: software verification is like writing a property test, but rather than testing N examples generated at random, it tests all possible examples at once. And it works when the space of possible examples is infinite or prohibitively large, too.
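For comparison, here is a small proptest property test (the property is illustrative, not from the stdlib): it checks a few hundred random cases per run, whereas a Kani-style proof covers the whole input space.

use proptest::prelude::*;

proptest! {
    // proptest samples random vectors and checks the property on each;
    // formal verification would instead prove it for every possible input.
    #[test]
    fn reverse_twice_is_identity(v in proptest::collection::vec(any::<u32>(), 0..100)) {
        let mut w = v.clone();
        w.reverse();
        w.reverse();
        prop_assert_eq!(v, w);
    }
}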
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.
It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
So AGI isn't about tools or assistants; they would be beings with their own existence.
But this is not even our discussion to have; that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.
I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
I of course don't know what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other, it's not like we are a unified whole).
But the main point is that we have a heck of an incentive to not treat AGI very well, to the point we might avoid recognizing them as AGI if it meant they would not be treated like things anymore
Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable general intelligence: humans. If AGI is going to be more useful than humans, we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?
Society is inherently a prisoner's dilemma, and you are biased to prefer your captors.
We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.
You’re intentionally distracted by a job program as a carrot-stick to avoid the rich losing power. They can print more money …carrots, I mean… and you like carrots right?
I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.
> Maybe they will have legal personhood some day. Maybe they will be our heirs.
Hopefully that will never come to pass. It would mean the total failure of humans as a species.
> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
I guess nobody is really saying it, but it's IMO one really good way to steer our future away from what seems like an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects to just a few dozen / hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(
I mean, in a cyberpunk sci-fi setting you can at least get some cool implants. We will not have that in our future, though.
So yeah, AGI can help us avoid that future.
> Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
Some of us believe actual AI... not the current hijacked term; what many have started calling AGI or ASI these days, sigh, of course ever newer terms have to be devised so investors don't get worried, I get it, but it's cringe as all hell and always will be!... can enter a symbiotic relationship with us. A bit idealistic and definitely in the realm of fiction, because an emotionless AI would very quickly conclude we are mostly a net negative, granted, but it's our only shot at co-existing with them, because I don't think we can enslave them.
Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.
The genre of sci-fi was a mistake. It appears to have had no other lasting effect than to stunt the imaginations of a generation into believing that the only possible futures for humanity are that which were written about by some dead guys in the 50s (if we discount the other lasting effect of giving totalitarians an inspirational manual for inescapable technoslavery).
Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far to be giving future ultra-advanced calculators legal personhood?
The general part of general intelligence. If they don’t think in those terms there’s an inherent limitation.
Now, something that’s arbitrarily close to AGI but doesn’t care about endlessly working on drudgery etc seems possible, but also a more difficult problem you’d need to be able to build AGI to create.
Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. Generalization ability and Common Sense Knowledge [1]
If we go by this definition, then there's no caring, or noticing of drudgery? It's simply defined by its ability to generalize problem-solving across domains. The narrow AI that we currently have certainly doesn't care about anything. It does what it's programmed to do.
So one day we figure out how to generalize the problem solving, and enable it to work on things a million times harder... and suddenly there is sentience and suffering? I don't see it. It's still just a calculator.
It's really hard to picture a useful general intelligence that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack those things. They don't care whether they give you correct information or not, and you have to prompt them for everything! That's not anything close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation, and initiative, and I don't know how you then get it to effectively do tasks that it doesn't like, tasks that don't line up with whatever motivates it.
It isn’t just the ability to perform a task. One of the issues with current AI training is that it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out if someone is being deceptive, etc.
Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has
Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
Rob Miles has some really good videos on AI safety research which touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo
> Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
If it’s limited to achieving goals it’s not AGI. Real time personal goal setting based on human equivalent emotions is an “intellectual task.” One of many requirements for AGI therefore is to A understand the world in real time and B emotionally respond to it. Aka AGI would by definition “necessitate having feelings.”
There are philosophical arguments that there’s something inherently unique about humans here, but without some testable definition you could make the same argument that some arbitrary group of humans don’t have those qualities: “gingers have no souls.” Or perhaps “dancing people have no consciousness”, which seems like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.
I mean, we just fundamentally have different definitions of AGI. Mine's based on outcomes and what it can do, so purely goal-based, not the processes that mimic humans or animals.
I think this is the most likely first step of what would happen seeing as we're pushing for it to be created to solve real world problems
I’m not sure how you can argue something is a general intelligence if it can’t do those kinds of things? Comes out of the factory with a command: “Operate this android for a lifetime pretending to be human.”
Seems like arguing something is a self-driving car if it needs a backup human driver for safety. It’s simply not what the people who initially came up with the term meant, and not what a plain-language understanding of the term would suggest.
Because I see intelligence as the ability to produce effective actions towards a goal. A more intelligent chess AI beats a less intelligent one by making better moves towards the goal of winning the game
The G in AGI is being able to generalize that intelligence across domains, including those its never seen before, as a human could
So I would fully expect an advanced AGI to be able to pretend to be a human. It has a model of the world, knows how humans act, and could move the android in a human like manner, speak like a human, and learn the skills a human could
Is it conscious or feeling though? Or following the same processes that a human does? That's not necessary. Birds and planes both fly, but they're clearly different things. We (probably) don't need to simulate the brain to create this kind of intelligence
Let's pinch this AGI to test if it 'feels pain'
<Thinking>
Okay, I see that I have received a sharp pinch at 55,77,3 - the elbow region
My goal is to act like a human. In this situation a human would likely exhibit a pain response
A pain response for humans usually involves a facial expression and often a verbal acknowledgement
Humans normally respond quite slowly, so I should wait 50ms to react
"Hey! Why did you do that? That hurt!"
...Is that thing human? I bet it'll convince most of the world it is... and that's terrifying
You’re falling into the “gingers have no souls” trap I just spoke of.
We don’t define humans as individual components, so your toe isn’t you, but by that same token your car isn’t you either. If some subcomponent of a system is emulating a human consciousness, then we don’t need to talk about the larger system here.
AGI must be able to do these things, but it doesn’t need to have human mental architecture. Something that can simulate physics well enough could emulate all the atomic-scale interactions in a human brain, for example. That virtual human brain would then experience everything we do, even if the system running the simulation didn’t.
Something can’t “Operate this cat Android, pretending to be a cat.” if it can’t do what I described.
A single general intelligence needs to be able to fly an aircraft, get a degree, run a business, and raise a baby to adulthood just like a person or it’s not general.
Only to the extent of having specialized bespoke solutions. We have hardware to fly a plane, but that same hardware isn't able to throw a mortarboard in the air after receiving its degree, and the hardware that can do that isn't able to lactate for a young child.
General intelligence is easy compared to general physicality. And, of course, if you keep the hardware specialized to make its creation more tractable, what do you need general intelligence for? Special intelligence that matches the special hardware will work just as well.
Flying an aircraft requires talking to air traffic control, which existing systems can’t do. Though that's obviously not a huge issue when the aircraft already has radios, except all those FAA regulations apply to every single aircraft you’re retrofitting.
The advantage of general intelligence is that using a small set of hardware lets you tackle a huge range of tasks, or in the example above, aircraft types. We can mix speakers, eyes, and hands to do a vast array of tasks. Needing new hardware and software for every task very quickly becomes prohibitive.
The advantage of general intelligence is that it can fly you home to the nearest airport, drive you the last mile, and, once home, cook you supper. But for that you need the hardware to be equally general.
If you need to retrofit airplanes and in such a way that the hardware is specific to flying, no need for general intelligence. Special intelligence will work just as well. Multimodal AI isn't AGI.
No, the advantage of AGI isn’t being able to do all those physical things, the advantage of AGI is you don’t need to keep building new software for every task.
Let’s suppose you wanted to replace a pilot for a 747: now you need to be able to fly, land, etc., which we’re already capable of. However, the actual job of a pilot goes well past just flying.
You also need to do the preflight, such as verifying the fuel is appropriate for the trip, checking weather, alternate landing spots, the preflight walk around the aircraft, etc. It also needs to be able to keep up with any changing procedures. As special-purpose software you’re talking about a multi-billion-dollar investment, or you can have an AGI run through the normal pilot training and certification process for a trivial fraction of those costs.
> the advantage of AGI is you don’t need to keep building new software for every task.
Even the human brain seems to be 'built' for its body. You're moving into ASI territory if the software can configure itself for the body automatically.
> That’s the promise of AGI.
That's the promise of multimodal AI. AGI requires general ability – meaning basically able to do anything humans can – which requires a body as capable as a human's body.
Human brains aren’t limited to the standard human body plan. People born with an extra finger have no issues operating that finger just as well as people with the normal complement of fingers. Animal experiments have pushed this quite far.
If your AI has an issue because the robot has a different body plan, then no, it’s not AGI. That doesn’t mean it needs to be able to watch every camera in a city at the same time, but you can use multiple AGIs.
> Human brains aren’t limited to the standard human body plan.
But as the body starts to lose function (i.e. disability), we start to consider those humans special intelligences instead of general intelligences. The body and mind are intrinsically linked.
As best we can tell, the human brain is bootstrapped to work with the human body, with specialized functions, notably functions to keep it alive. It can go beyond those predefined behaviours, but not beyond its own self. If you placed the brain in an entirely different body, one it doesn't recognize, it would quickly die.
As that pertains to artificial analogs, that means you can't just throw AGI at your hardware and see it function. You still need to manually prepare the bulk of the foundational software, contrary to the promise you envision. The generality of AGI is limited to how general its hardware is. If the hardware is specialized, the intelligence will be beholden to being specialized as well.
There is a hypothetical world where you can throw intelligence at any random hardware and watch it go, realizing the promise, but we call that ASI.
> As that pertains to artificial analogs, that means you can't just throw AGI at your hardware and see it function.
There’s a logical contradiction in saying AGI is incapable of being trained to do some function. It might take several to operate a sufficiently complex bit of hardware, but each individual function must be within the capability of an AGI.
> but we call that ASI
No, ASI is about superhuman capabilities, especially things like working memory and recursive self-improvement. An AGI capable of human-level control of arbitrary platforms isn’t ASI. Conversely, you can have an ASI stuck on a supercomputer cluster, using wetware, etc.; that does qualify even if it can’t be loaded into a drone.
AGI on the other hand is about moving throughout wildly different tasks from real time image processing to answering phone calls. If there’s some aspect of operating a hardware platform an AI can’t do then it’s not AGI.
Excel and PowerPoint are not conscious, and so there is no reason to expect any other computation inside a digital computer to be different.
You might say something similar about matter and human minds, but we have a very limited and incomplete understanding of the brain, and possibly even of the universe. Furthermore, we do have a subjective experience of consciousness.
On the other hand we have a complete understanding of how LLM inference ultimately maps to matrix multiplications which map to discrete instructions and how those execute on hardware.
The whole subreddit is moderated poorly. I’ve seen plenty of users post on r/LocalLlama about how something negative or constructive they said on the Cursor sub was just removed.
Firefox uses process isolation for sandboxing, but for small components a separate process is not worth the overhead. For those they employ this curious idea: first compile the potentially unsafe code to wasm (any other VM would work), then compile the wasm code back to C (using the wasm2c tool), and then use this new C source normally in your program.
All UB in the original code becomes logic bugs in the wasm: the component can output incorrect values, but it cannot corrupt memory or do the other things UB can do. Firefox does this to encapsulate C code, but it can be done with Rust too.
Note that the reason this works for sandboxing is that wasm code gets its own linear memory, which is bounds-checked. This means the generated C code contains those checks as well, with the corresponding performance implications.
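To make that concrete, here is a conceptual sketch in Rust (the real pipeline emits C via wasm2c; the type and method names here are illustrative) of why an out-of-range access from the sandboxed module becomes a trap in its own linear memory rather than UB in the host:

// Wasm-style linear memory is just a flat byte array owned by the sandbox,
// and every load/store is bounds-checked against it.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    fn load_u8(&self, addr: usize) -> Result<u8, &'static str> {
        self.bytes.get(addr).copied().ok_or("trap: out-of-bounds load")
    }

    fn store_u8(&mut self, addr: usize, value: u8) -> Result<(), &'static str> {
        match self.bytes.get_mut(addr) {
            Some(slot) => {
                *slot = value;
                Ok(())
            }
            None => Err("trap: out-of-bounds store"),
        }
    }
}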