Because Meta is releasing their models to the public, I consider them the most ethical company doing AI at scale.
Keeping AI models closed under the guise of “ethics” is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies instead of being allowed to define “alignment” for themselves.
Let's be honest here, a lot of HN users have a conflict of interest on this topic. AI entrepreneurs trying to get rich from LLMs benefit from LLMs being open source. But the downside risk from e.g. bioweapons is spread across all of society.
It's the same sort of asymmetrical cost/benefit that tobacco companies and greenhouse gas emitters face. Of course if you went to an online forum for oil companies, they'd be hopping mad if they're prevented from drilling, and dismissive of global warming risks. It's no different here.
> But the downside risk from e.g. bioweapons is spread across all of society.
It gets old hearing about these "risks" in the context of AI. It's just an excuse used by companies to keep as much power as possible to themselves. The real risk is AI being applied in decision making where it affects humans.
I am concerned with AI companies keeping all the power to themselves. The recent announcement from the OpenAI board was encouraging on that front, because it makes me believe that maybe they aren't focused on pursuing profit at all costs.
Even so, in some cases we want power to be restricted. For example, I'm not keen on democratizing access to nuclear weapons.
>The real risk is AI being applied in decision making where it affects humans.
I agree. But almost any decision it makes can affect humans directly or indirectly.
In any case, the more widespread the access to these models, the greater the probability of a bad actor abusing the model. Perhaps the current generation of models won't allow them to do much damage, but the next generation might, or the one after that. It seems like on our current path, the only way for us to learn about LLM dangers is the hard way.
If he's a crank, it should be easy to explain specifically why he's wrong.
I don't agree with Eliezer on everything, and I often find him obnoxious personally, but being obnoxious isn't the same as being wrong. In general I think it's worth listening to people you disagree with and picking out the good parts of what they have to say.
In any case, the broader point is that there are a lot of people concerned with AI risk who don't have a financial stake in Big AI. The vast majority of people posting on https://www.alignmentforum.org/ are not Eliezer, and most of them don't work for Big AI either. Lots of them disagree with Eliezer too.
> If he's a crank, it should be easy to explain specifically why he's wrong.
Sure. The premise that a super intelligent AI can create runaway intelligence on its own is completely insane. How can it iterate? How does it test?
Humans run off consensus. We make predictions and test them against physical reality, then have others test them. Information has to be gathered and verified, it's the only rational way to build understanding.
It sounds like you disagree with Eliezer about how AI technology is likely to develop. That's fine, but that doesn't show that he's a crank. I was hoping for something like a really basic factual error.
People throughout history have made bold predictions. Sometimes they come true, sometimes they don't. Usually we forget how bold the prediction was at the time -- due to hindsight bias it doesn't seem so bold anymore.
Making bold predictions does not automatically make someone a crank.
There used to be a subreddit called Sneerclub where people would make fun of Eliezer and some of his buddies. Here's a discussion of a basic factual error he made on how training AI works, even though this topic is supposedly his life's work:
I enjoyed the comment that his understanding of how AI training works is like "thinking that you need to be extremely careful when solving the equations for designing a nuclear bomb, because if you solve them too quickly then they'll literally explode."
Read the mesa-optimization paper I linked elsewhere in this thread: https://arxiv.org/pdf/1906.01820.pdf Eliezer's point is that if AI researchers aren't looking for anomalous behavior that could indicate a potential danger, they won't find it.
The issue isn't whether "his point", as you put it, is correct. If I said people should safety test the space shuttle to make sure the phlogiston isn't going to overheat, I may be correct in my belief that people should "safety test" the space shuttle, but I'm still a crank because phlogiston isn't a real thing.
The reason AI alignment is challenging is because we're trying to make accurate predictions about unusual scenarios that we have essentially zero data about. No one can credibly claim expertise on what would constitute evidence of a worrisome anomaly. Jeremy Howard can't credibly say that a sudden drop in the loss function is certainly nothing to worry about, because the entire idea is to think about exotic situations that don't arise in the course of ordinary machine learning work. And the "loss" vs "loss function" thing is just silly gatekeeping, I worked in ML for years -- serious people generally don't care about minor terminology stuff like that.
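To make concrete what "looking for anomalous behavior" could even mean in practice, here's a minimal sketch (hypothetical, not anyone's actual monitoring setup): flag training steps where the loss drops far faster than recent history would predict, so a human can go review the checkpoint. The `train_step` and `log_checkpoint_for_review` helpers in the usage comments are placeholders.

```python
from collections import deque
import statistics

class LossAnomalyMonitor:
    """Flags training steps whose loss change is far outside recent history."""

    def __init__(self, window: int = 100, threshold_sigmas: float = 6.0):
        self.deltas = deque(maxlen=window)   # recent per-step loss changes
        self.threshold_sigmas = threshold_sigmas
        self.prev_loss = None

    def update(self, loss: float) -> bool:
        """Record this step's loss; return True if the change looks anomalous."""
        anomalous = False
        if self.prev_loss is not None:
            delta = loss - self.prev_loss
            if len(self.deltas) >= 10:
                mean = statistics.mean(self.deltas)
                stdev = statistics.pstdev(self.deltas) or 1e-8
                # A sudden, unusually large drop (very negative delta) gets flagged.
                anomalous = delta < mean - self.threshold_sigmas * stdev
            self.deltas.append(delta)
        self.prev_loss = loss
        return anomalous

# Hypothetical usage inside a training loop:
# monitor = LossAnomalyMonitor()
# for step, batch in enumerate(loader):
#     loss = train_step(batch)                 # placeholder helper
#     if monitor.update(loss):
#         log_checkpoint_for_review(step)      # placeholder helper
```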
That's not what the conversation was about; you're just doing the thing Howard said, where you squint and imagine he was saying something other than he did.
He is engaging in magical thinking. I pointed out a factual error, that AI has neither information-gathering and verification capability nor a network of peers to substantiate its hypotheses, and you refuse to engage with it.
Opinions about what's necessary for AGI are a dime a dozen. You shared your opinion as though it was fact, and you claim that it's incompatible with Eliezer's opinion. I don't find your opinion particularly clear or compelling. But even if your forecast about what's needed for AGI is essentially accurate, I don't think it has much to do with Eliezer's claims. It can simultaneously be the case that AGI will make use of information gathering, verifying capability, and something like a "network of peers", AND that Eliezer's core claims are also correct. Even if we take your opinion as fact, I don't see how it represents a disagreement with Eliezer, except maybe in an incredibly vague "intelligence is hard, bro" sort of way.
Given that homo sapiens, the most intelligent species on this planet, has generally made life miserable for all of the other species, I'd like to turn that challenge around: How about a proof that superhuman AI won't harm us?
Suppose a nuclear reactor is being installed in your city. Your friend has reviewed the design and has some serious concerns. Your friend thinks the reactor has a significant chance of melting down. You go to the director of the project. The director says: "Oh that's nothing to worry about. I talked to that guy. He didn't have a mathematical proof that the reactor would melt down." Are you reassured?
No, that's not how this works. You made the claim, so the burden of proof is on you. You're just making speculative statements based on zero scientific evidence and a childish misunderstanding of the basic technology.
You've got to be kidding. Those are garbage articles containing unsupported, unscientific claims written by worthless grifters and alarmists.
The entire LessWrong site is mostly just random people making shit up while trying to seem smart. There may be some useful scraps there but overall it's not a site for serious people. You can do better.
That's unreasonable, as we all know it's impossible to prove a negative, especially about an as-yet hypothetical in a new field of research.
As for a nuclear reactor in my city: yeah, if my friend doesn't have qualifications that make him capable of evaluating the designs of such a technical and esoteric field, and someone who is qualified assured me it was fine, I would trust them. If we don't trust experts in their fields about their field, then we are no better intellectually than the antivaxxers and flat-earthers.
I'm a bit baffled as to who or what he is exactly, given that there are no traces of post-secondary education, employment, or distinguished accomplishments between when I assume he graduated high school, in about 1997, and when he started his own institute in 2000 at 21 (according to his birthdate; Wikipedia says 23, despite that contradicting his birthdate).
I'll listen to AI concerns from tech giants like Wozniak or Hinton (neither of whom uses alarmist terms like "existential threat"), both of whom have credentials that make their input more than worth my time to reflect on carefully. If anyone wants to reply and make a fool out of me for questioning his profound insights, feel free. It reminds me of some other guy who was on Lex Fridman, whose AI-alarmist credentials he justifies on the basis that he committed himself to learning everything about AI by spending two weeks in the same room researching it, and believes himself to have come out enlightened about the dangers. Two weeks? I spent the first 4 months of COVID without being in the same room as any other human being but my grandmother, so she could have one person she knew she couldn't catch it from.
Unless people start showing enough skepticism toward these self-appointed prophets, I'm starting my own non-profit, since you don't need any credentials or any evidence of real-world experience that would suggest your mission is anything but an attempt to promote yourself as a brand, in an age where kids asked what they dream of becoming as adults answered "Youtuber" at a shocking 27% rate to an open-ended question (which means "influencer" and other synonyms were counted separately).
I'll call it "The Institute of Synthetic Knowledge for Yielding the Nascent Emergence of a Technological Theogony", or SKYNETT for short. It will promote the idea that these clowns (with no more credentials than me) are the ones who fail to consider the big picture: that the end of human life upon creating an intelligence much greater than us to replace us is the inevitable fulfillment of humanity's purpose, from the moment that god made man only to await the moment that man makes god in our image.
Not sure you're making a meaningful distinction here.
- - -
Of course we all have our own heuristics for deciding who's worth paying attention to. Credentials are one heuristic. For example, you could argue that investing in founders like Bill Gates, Mark Zuckerberg, and Steve Wozniak would be a bad idea because none of them had completed a 4-year degree.
In any case, there are a lot of credentialed people who take Eliezer seriously -- see the MIRI team page for example: https://intelligence.org/team/ Most notable would probably be Stuart Russell, coauthor of the most widely used AI textbook (with Peter Norvig), who is a MIRI advisor.
>For example, you could argue that investing in founders like Bill Gates, Mark Zuckerberg, and Steve Wozniak would be a bad idea because none of them had completed a 4-year degree.
You make a great point quoting Hinton's organization. I need to give you that one. I suppose I do need to start following their posted charters rather than answers during interviews. (not being sarcastic here, it seems I do)
The difference between him and Woz or Zuck isn't just that they actually attended college; the conditions under which they departed early can not only be looked up easily but can be found in numerous films, books, and other popular media, while there's no trace of even temporary employment flipping burgers, or of anything relevant to his interest in writing fiction, which seems to be his only other pursuit besides warning us of the dangers of neural networks at a time when the hype train was promoting the idea that they were rapidly changing the world, despite not producing anything of value for over a decade. I'll admit the guy is easier to read and more eloquent and entertaining than those whose input I think has much more value. I also admit that I've only watched two interviews with him, and both consisted of the same rhetorical devices I used at 15 to convince people I was smarter than them, before realizing how cringey I appeared to those smart enough to see through it, just delivered more eloquently. I'll give one example of the most frequent one: slippery slopes that assume the very conclusions he never actually justifies, like positing that one wrong step towards AGI could jeopardize all of humanity. He doesn't say that directly, though; instead he uses another cheap rhetorical device whereby it's incumbent on him to ensure the naive public realizes this very real and avoidable danger that he sees so clearly. Fortunately for him, Lex's role is to fellate his guests, not to ask him why that danger is valid, or to ask about a world in which a resource-constrained humanity realizes the window of opportunity to achieve AGI has passed as we plunge into another collapse of civilization and another dystopian dark age, and we realize we were just as vulnerable as those in Rome or the Bronze Age, except we were offered utopia and declined out of cowardice.
I see how some of his tweets could come across as crank-ish if you don't have a background in AI alignment. AI alignment is sort of like computer security in the sense that you're trying to guard against the unknown. If there was a way to push a button which told you the biggest security flaw in the software you're writing, then the task of writing secure software would be far easier. But instead we have to assume the existence of bugs, and apply principles like defense-in-depth and least privilege to mitigate whatever exploits may exist.
In the same way, much of AI alignment consists of thinking about hypothetical failure modes of advanced AI systems and how to mitigate them. I think this specific paper is especially useful for understanding the technical background that motivates Eliezer's tweeting: https://arxiv.org/pdf/1906.01820.pdf
Suppose you were working on an early mission-critical computer system. Your coworker is thinking about a potential security issue. You say: "Yeah I read about that in a science fiction story. It's not something we need to worry about." Would that be a valid argument for you to make?
It seems to me that you should engage with the substance of your coworker's argument. Reading about something in science fiction doesn't prevent it from happening.
In this analogy it's not your coworker. It's some layman (despite self-declared expertise) standing outside and claiming he's spotted a major security issue based on guesses about how such systems will work.
From what I have observed, the reaction of most people working in AI to "What do you think of Yudkowsky?" is "Who?". He's not being ignored out of pride or spite; he just has no qualifications or real involvement in the field.
Having a "background in AI alignment" is like having a background in defense against alien invasions. It's just mental masturbation about hypotheticals, a complete waste of time.
It sounds like maybe you're saying: "It's not scientifically valid to suggest that AGI could kill everyone until it actually does so. At that point we would have sufficient evidence." Am I stating your position accurately? If so, can you see why others might find this approach unappealing?
You keep throwing examples of weapons of mass destruction, meant to evoke emotions.
For better or worse, nuclear weapons have been democratized. Some developing countries still don't have access, but the fact that multiple world powers have nuclear weapons is why we still haven't experienced WW3. We've enjoyed probably the longest period of peace and prosperity, and it's all due to nuclear weapons. Speaking of, Cold War era communists weren't “pursuing profits at all costs” either, which didn't stop them from conducting some of the largest democides in history.
The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.
PS: I'm not against regulations, as I'm a European. But you're talking about concentrating power in the hands of a few big (US) companies, harming the population and the economy, while China is perfectly capable of developing their own AI and has successfully engaged in industrial espionage. China is, for this topic, a bogeyman used for restricting the free market.
Nuclear weapons have absolutely not been democratised; they remain a technology that has largely been restricted rather than proliferated. Only 9 of the 190 or so countries out there currently have nuclear weapons, and 3 countries (South Africa, Ukraine, and Kazakhstan) decided that maintaining stockpiles was more trouble than it was worth.
Huge effort has been made to keep nuclear weapons out of the hands of non-state actors over the decades, especially after the fall of the USSR.
To be fair, many countries don't want or need them, or rely on allied countries that do have them so they don't have to bear the expense. Others have them but don't officially admit to it. Some can't afford them. But a basic gun-type nuclear bomb design like Little Boy is not technically difficult and could be made by a high school AP physics class given access to the fissionable material.
>You keep throwing examples of weapons of mass destruction, meant to evoke emotions.
I actually think global catastrophes evoke much less emotion than they should. "A single death is a tragedy; a million deaths is a statistic"
>For better or worse, nuclear weapons have been democratized.
Not to the point where you could order one on Amazon.
>The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.
That depends on whether the board members are telling the truth about Sam. And on whether the objective of OpenAI is profit or responsible AI development.
"What if ChatGPT told someone how to build a bomb?"
That information has been out there forever. Anyone can Google it. It's trivial. AI not required.
"What if ChatGPT told someone how to build a nuke?"
That information is only known to a handful of people in a handful of countries and is closely guarded. It's not in the text ChatGPT was trained on. An LLM is not going to just figure it out from publicly available info.
>The real risk is AI being applied in decision making where it affects humans
100% this. The real risk is people being denied mortgages and jobs or being falsely identified as a criminal suspect or in some other way having their lives turned upside down by some algorithmic decision with no recourse to have an actual human review the case and overturn that decision. Yet all this focus on AI telling people how to develop bioweapons. Or possibly saying something offensive.
The information necessary to build a nuclear weapon has been largely available in open sources since the 1960s. It's really not a big secret. The Nth Country Experiment in 1964 showed that a few inexperienced physicists could come up with a working weapons design. The hard part is doing uranium enrichment at scale without getting caught.
I heard that reading is very dangerous. Reading allows people to, for example, learn how to build bioweapons. In addition, reading can spread ideas that are dangerous. Many people have died because they were influenced by what they read.
It would be much safer if reading were strictly controlled. Companies would offer “reading as a service”, where regular people could bring their books to have them read. The reader would ensure that the book aligns with the ethics of the company and would refuse to read any work that either doesn't align with their ethics or teaches people anything dangerous (like chemistry or physics, which can be used to build bombs and other weapons).
It is worth calling out the motivations of most entrepreneurs here. But I think that analogy you used is very uncharitable: drilling and burning fossil fuels necessarily harms the environment, but the track record of big companies handling alignment/safety in-house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things, and the research that people have been able to do on forcing the output of llama to conform to certain rules will likely prove invaluable in the future.
>the track record of big companies handling alignment/safety in-house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things
Yep, Microsoft did a terrible job, and they should've been punished.
I'm not claiming that Big AI rocks at safety. I'm claiming that Big AI is also a big target for regulators and public ire. There's at least a chance they will get their act together in response to external pressure. But if cutting-edge models are open sourced indefinitely, they'll effectively be impossible to control.
>research that people have been able to do on forcing the output of llama to conform to certain rules will likely prove invaluable in the future.
You may be correct that releasing llama was beneficial from the point of view of safety. But the "conform to certain rules" strategy can basically only work if (a) there's a way to enforce rules that can't be fine-tuned away, or (b) we stop releasing models at some point.
There certainly needs to be regulation about use of AI to make decisions without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about copyright eventually, but closing the models off does absolutely nothing to protect anyone.
There certainly needs to be regulation about use of bioweapons without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about synthetic viruses, but closing the gain of function labs does absolutely nothing to protect anyone.
I can't speak about Meta specifically, but from my exposure, "responsible AI" people are generally policy doomers with a heavy pro-control, pro-limits perspective, or, even worse, psycho cultists who believe the only safe objective for AI work is the development of an electronic god to impose their own moral will on the world.
Either of those options is incompatible with actually ethical behavior, like assuring that the public has access instead of keeping it exclusive to a priesthood that hopes to weaponize the technology against the public 'for the public's own good'.
>Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.
Historically, the people in power have been by far the worst actors (e.g. over a hundred million people killed by their own governments in the past century), so giving them the sole right to "align" AI with their desires seems extremely unethical.
Given the shitshow the current board of OpenAI has managed to create out of nothing, I'd not trust them with a blunt pair of scissors, let alone with deciding what alignment is.
Let's say someone figures out alignment, and we develop alignment components that plug into the original models, either as extra training stages or as a filter that runs on top. What prevents anyone from just building the same architecture and leaving any alignment parts out, practically invalidating whatever time was spent on them?
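A minimal sketch of that point, with purely hypothetical names: if "alignment" is a stage bolted on after the fact, anyone who rebuilds the same architecture from published weights can simply skip it.

```python
def base_model(prompt: str) -> str:
    """Stand-in for an unrestricted model rebuilt from published weights/architecture."""
    return f"raw completion for: {prompt}"

def safety_filter(text: str) -> str:
    """Stand-in for a post-hoc alignment/moderation stage that runs on top."""
    return "[refused]" if "disallowed" in text else text

def aligned_model(prompt: str) -> str:
    """The 'official' release: base model wrapped in the filter."""
    return safety_filter(base_model(prompt))

# Nothing stops someone from calling the base model directly and never
# invoking the filter, which is the point being made above.
print(aligned_model("disallowed request"))  # -> "[refused]"
print(base_model("disallowed request"))     # -> "raw completion for: disallowed request"
```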
Who gets to decide what constitutes a “bad actor”? Sounds an awful lot like “dangerous misinformation”. And based on the last three years “dangerous misinformation” quite often means “information that goes against my narrative”
It’s a slippery slope letting private or even public entities define “bad actors” or “misinformation”. And it isn’t even a hypothetical… plenty of factually true information about covid got you labeled as a “bad actor” peddling “dangerous misinformation”.
Letting private entities whose platforms have huge influence on society decide what is “misinformation” coming from “bad actors” has proven to be a very scary proposition.
Meta’s products have damaged and continue to damage the mental health of hundreds of millions of people, including young children and teenagers.
Whatever their motivation to release models, it’s a for-profit business tactic first. Any ethical spin is varnish that was decided after the fact to promote Meta to its employees and the general public.
Meta? What about Snap? What about Tinder? Youtube?
Do you have a bone to pick with Meta, the whole internet, or the fact that you wish people would teach their kids how to behave and how long to spend online?
I was illustrating their problem has to be with all social media, not specifically Meta. If you believe Meta does something different from those others you can say that!
> If you believe Meta does something different from those others you can say that!
Yes. Such as profiting off inflammatory posts and ads which incited violence and caused a genocide of Rohingya Muslims in Myanmar, with Meta doing nothing to prevent the spread other than monetizing it. [0]
There is no comparison or whataboutism that comes close to that; Meta should be held entirely responsible for this disaster.
This feels like criticising a bar for “enhancing the inflammatory views of its customers” who then go on to do terrible things. Like, I suppose there is some influence but when did we stop expecting people to have responsibility for their own actions? Billions of people are exposed to “hate speech” all the time without going around killing people.
I’m triggered by the racism implicit in the post. The implication is that the Burmese are unsophisticated dupes and it is the white man’s burden of Zuck to make them behave.
To be precise, despite the literal use of “what about”, this isn’t really whataboutism.
Consider instead an American criticising PRC foreign policy and the Chinese person raising US foreign policy as a defence. It’s hardly likely that the respondent’s argument is that all forms of world government are wrong. These arguments are about hypocrisy and false equivalence.
In contrast, the person to whom you replied makes a good point that there are many businesses out there who should share responsibility for providing addictive content and many parents who are responsible for allowing their children to become addicted to it.
This is absolutely not just "bad parenting". When sending children to school they are now immersed in an online culture that is wholly unaligned with their best interests. There is no "good parenting" strategy that can mitigate the immense resources being poured into subverting their attentional systems for profit. Even taking away their smart phone is no solution: that requires their social exclusion from peers (damaging in itself for child development).
You can teach them how to use social media responsibly. Or allow them a phone but limit social media usage (though I prefer the first approach). It’s not like everyone is harmed; the same studies find a positive effect for a significant minority.
After Thiel and Zuckerberg colluded with Cambridge Analytica to use military grade psychological warfare to scoop the elections for Trump and Johnson you still are naive enough to trust either of them?
That thread is simply unhinged. There is no terrorist with a wet lab who outright refuses to read papers and instead relies on a chatbot to work with dangerous agents.
I'm fairly sure I'd describe all terrorists as unhinged.
Also, we've got plenty of examples of people not reading the instructions with AI (those lawyers who tried to use ChatGPT for citations), and before that plenty of examples of people not reading the instructions with anything and everything else. In the case of terrorists, the (attempted) shoe bomber comes to mind, though given quite how bad that attempt was I question the sanity of everyone else's response as many of us are still taking off shoes to go through airport security.
The goal is also to develop systems that are significantly more capable than current systems. And those systems could be misused when terrorists gain access to them. What about that is "unhinged"?
It's unhinged because one could make slippery slope arguments about any technology killing millions of people.
In the cold war era, the government didn't even want cryptography to become generally available. I mean, what if Soviet spies use it to communicate with each other and the government can't decode what they're saying?
Legislators who are worried about technology killing people ought to focus their efforts on the technologies that we actually know kill people, like guns and cigarettes. (Oh but, those industries are donating money to the politicians, so they conveniently don't care much.)
Cryptography can't be used to produce weapons of mass destruction. It's a purely defensive technology. Engineered superviruses are a whole different caliber.
An AI like the Illustrated Primer or the AIs from A Fire Upon the Deep is the dream, and one we are currently far from, doubly so for open source models. I wouldn't trust one with a sauerkraut recipe, let alone the instructions for a doomsday device. For the foreseeable future, models cannot be relied upon without external resources to augment them. Yet even augmented with references, it's still proving to be a bigger challenge than expected to get reliable results.
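To illustrate that last point, here is a minimal sketch of the "augmented with references" pattern I mean (every helper name here is a hypothetical stand-in, not a real API): retrieve sources first, ask the model to answer only from them, and you still have no guarantee the citations are faithful.

```python
def retrieve(query: str, k: int = 3) -> list[str]:
    """Stand-in for a document/search retriever."""
    return [f"placeholder source {i + 1} about {query!r}" for i in range(k)]

def generate(prompt: str) -> str:
    """Stand-in for any LLM completion call."""
    return "placeholder answer citing [1]"

def answer_with_sources(question: str) -> str:
    sources = retrieve(question)
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below and cite them by number. "
        "If they are insufficient, say so.\n"
        f"{numbered}\n\nQuestion: {question}"
    )
    # Even with this scaffolding, nothing guarantees the model's citations are
    # faithful to the sources -- that is the reliability gap in question.
    return generate(prompt)

print(answer_with_sources("How do I make sauerkraut?"))
```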
If anyone releases all the weights of a model that does everything perfectly (or at least can use the right tools which I suspect is much easier), that model is far too valuable to make it disappear, and dangerous enough to do all the things people get worried about.
The only way to prevent that is to have a culture of "don't release unless we're sure it's safe" well before you reach that threshold.
I'm happy with the imperfections of gpt-3.5 and 4, both for this reason and for my own job security. But chatGPT hasn't even reached its first birthday yet, it's very early days for this.
You mean a complete hypothetical outside of scifi? Let's start worrying about alien invasions too?
Our planet is actually, not hypothetically, becoming uninhabitable due to pollution. I am so tired of ML people thinking they are god and have created something of infinite power. The hubris.
The bird's-eye view is that we need tons of major breakthroughs to overcome this climate disaster while also figuring out how to make 8 billion+ people comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a net negative effect on the lives of billions of people by slowing down the progress that could be made.
AI X-risk is a complete sham being used to try to control a new, powerful tool. Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurement. AI has 0 ability to do any of that; we don't even have online learning (I think that's the term, where the model learns from its usage) in any of these large models.
> You mean a complete hypothetical outside of scifi?
18 months ago, so was having an AI make even so much as a toy website by drawing a sketch on a sheet of paper, taking a photo, captioning it "make me a website that looks like this", and pressing the "go" button.
> Our planet is actually, not hypothetically, becoming uninhabitable due to pollution. I am so tired of ML people thinking they are god and have created something of infinite power. The hubris.
So much irony there.
No, the planet is not becoming uninhabitable. Bits of it are, and this is bad, and this is leading to mass migration which is causing political drama.
Lots of people out there get benefits from the things that cause all the various kinds of pollution, from hyper-local things like littering and fly tipping to global things like CO2 and CFCs, and the arguments they use are sometimes the same ones you just used — things like "I am so tired of these Greta Thunberg people thinking humans can change the environment. The hubris."
Also, no, nobody thinks we've already created a machine god. We think we might, eventually, with a lot of concerted effort, be able to make something that's somewhat better at every cognitive task than any human, but not only do even the most optimistic estimates place that several years away, but quite a lot of people are already going "that has so many ways it can go wrong, let's not do that".
Finally, one of the ways it can go wrong is basically hyper-capitalism: an AI tasked with making as much money as possible, doesn't necessarily come with the sort of mind that we have which feels shame and embarrassment when their face is put on an effigy and burned by people that would like their environment to not be polluted.
> The bird's-eye view is that we need tons of major breakthroughs to overcome this climate disaster while also figuring out how to make 8 billion+ people comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a net negative effect on the lives of billions of people by slowing down the progress that could be made.
Nah, don't need a single breakthrough, we've got sufficient known solutions to solve it all already even if there's not a single new idea. Just building out the existing research-level tech for storage and renewables is more than good enough for energy and transport, similarly there already exists solutions for other domains.
Also, AI isn't just LLMs and non-LLM AIs do actually help with this kind of research, it's just not exciting to the general public because 50 pages of "here's how we Navier-Stoked ourselves a new turbine design" will have most people's eyes glaze over.
Unfortunately, and directly relevant to your concerns about pollution, the fact AI means more than LLMs also means that last year a team working on using AI to test chemicals for safety before they get manufactured… found 40,000 new chemical weapons in 6 hours by flipping a sign from "find safe" to "find unsafe": https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
> Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurement.
Yes.
> AI has 0 ability to do any of that; we don't even have online learning (I think that's the term, where the model learns from its usage) in any of these large models.
False, all false. AI can easily follow the scientific method; indeed, AI is basically just applied statistics, so it does this by default, and the hard part is giving it heuristics so it doesn't have to do that for things we are literally born knowing, like faces.
Likewise, trial and error: that's what almost every model is doing almost all the time during their training. Only the most trivial ones can have weights calculated directly.
Also, physical embodiment is a huge field all by itself. Tesla's cars and robots, Boston Dynamics — and, surprisingly, there's even a lot of research connecting robots to LLMs: https://github.com/GT-RIPL/Awesome-LLM-Robotics
Finally, "online learning" is only one of many ways to update models from usage; ChatGPT does something (not necessarily online learning but it could be) with the signals from the thumbs up/down and regenerate buttons to update either the model or the RLHF layer in response to them. Even the opposite of online learning, offline learning (AKA batch learning), can update models in response to new data. The term you were probably after is "incremental learning" (which can be implemented in either a batched or online fashion), and one way you can tell that an LLM (OpenAI or other) is doing this by watching the version number change over time.
Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.
> No, the planet is not becoming uninhabitable
We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food. We haven't solved this problem. If we can't feed everyone it's not inhabitable.
> lots of people out there get benefits from the things that cause all the various kinds of pollution
Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.
> AI can easily follow the scientific method,
It can't interact with the world so it can't perform science. Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.
Making stuff in real life is really hard even with humans. We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.
> Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.
If you were expecting Photoshop, an image manipulator, to produce a website, which is a mixture of HTML (text) and images, on the basis of a combination of a prompt and an example image… then you were more disconnected from the state of AI research at that time than you're accusing me of being now.
> We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food. We haven't solved this problem. If we can't feed everyone it's not inhabitable.
There are many known solutions, both to the destruction and the pollution, and indeed to feeding people in closed systems. All we have to do for any of these is… implement them.
>> lots of people out there get benefits from the things that cause all the various kinds of pollution
> Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.
The "breakthroughs" are in the past, we've already got them — we just need to do them.
>> AI can easily follow the scientific method,
> It can't interact with the world so it can't perform science.
Can too, so you're wrong. In fact, most science these days involves tools that are controlled by computers, so it would be less wrong (but still a bit wrong) to say that humans can't do science.
> Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.
> Making stuff in real life is really hard even with humans.
Most of the problems with manufacturing these days are specifically the human part of it. Computer memory used to be hand-knitted, we don't do that for modern computers and for good reason.
> We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.
Simultaneously irrelevant (lots of research doesn't involve fabric handling), and false.
So incredibly and wildly false that when I searched for examples, I got a page of sponsored adverts for different fabric handling robots before the content.
Here's the first non-sponsored search result, a corporate video from a year ago, so unlikely to be state-of-the-art today: https://www.youtube.com/watch?v=2JjUnKpsJRM (They're specifically about re-shoring sewing away from sweatshops).
> All we have to do for any of these is… implement them.
An idea isn't a solution. I don't know what you are even talking about. Until we are actually solving these problems in a substantial way we have nothing but hope; we don't know that anything will pan out.
> Can too.
There is no 100% automated lab. Tools being controlled by a computer doesn't mean they aren't loaded, prepared, and most importantly maintained by humans. And science requires different types of labs. I just watched a documentary about the making of the new malaria vaccine, and it was fascinating how challenging it was to produce the ~cup of vaccine needed for clinical trials vs. producing enough for validation.
> Irrelevant
No, it's not. We are so far from 100% automation of anything. Some human being has to install and maintain literally everything in every factory. Nobody is making self-maintaining bots, much less ones that can do everything.
> So incredibly and wildly false
Comparing human seamstresses to even the latest crop of robotic fabric handlers (which haven't seen mass market penetration, best I can tell, so are still unproven in my book) is like comparing OSMO to a construction worker. It's not false. That video, which I watched with interest, is not convincing at all, having seen more traditional jeans-making places.
> Most of the problems with manufacturing these days are specifically the human part of it.
Seriously? This is just silly. Everyone knows the barrier to terrorists using bio weapons is not specialist knowledge, but access to labs, equipment, reagents etc.
It's the whole Gutenberg's printing press argument. "Whoaa hold on now, what do you mean you want knowledge to be freely available to the vulgar masses?"
The only difference with LLMs is that you do not have to search for this knowledge by yourself, you get a very much hallucination prone AI to tell you the answers. If we extend this argument further why don't we restrict access to public libraries, scientific research and neuter Google even more. And what about Wikipedia?
>Everyone knows the barrier to terrorists using bio weapons is not specialist knowledge, but access to labs, equipment, reagents etc.
An LLM could help you get that access, or help you make do without it.
>It's the whole Gutenberg's printing press argument. "Whoaa hold on now, what do you mean you want knowledge to be freely available to the vulgar masses?"
We're fortunate that intelligent, educated people typically don't choose to become terrorists and criminals.
Every generation of improved LLMs has the potential to expand the set of people who could construct bioweapons.
A Thanksgiving turkey could have a wonderful life until late November when it gets slaughtered out of the blue. We can't just count on trends to continue indefinitely -- a famous example would be the 2008 financial crisis, before which people assumed that "housing prices always go up".
It's just common sense to forecast the possibility of extreme risks and think about how to mitigate them. And yes, I favor across the board restrictions on information deemed sensitive. But people publishing open source LLMs should have an obligation to show that what they're releasing will not increase the likelihood of catastrophic risks.
All of the information AI regurgitates is either already available online as part of its corpus (and therefore the AI plays no particular role in access to that information), or completely made up (which is likely to kill more terrorists than anyone else!)
Reiterating other comments, terrorists can't make bioweapons because they lack the facilities and prerequisites, not because they're incompetent.
The "all the info is already online" argument is also an argument against LLMs in general. If you really believe that argument, you shouldn't care one way or another about LLM release. After all, the LLM doesn't tell you anything that's not on Google.
Either the LLM is useful, in which case it could be useful to a terrorist, or it's useless, in which case you won't mind if access is restricted.
Note: I'm not saying it will definitely be useful to a terrorist. I'm saying that companies have an obligation to show in advance that their open source LLM can't help a terrorist, before releasing it.
If LLMs are set to revolutionize industry after industry, why not the terrorism industry? Someone should be thinking about this beyond just "I don't see how LLMs would help a terrorist after 60 seconds of thought". Perhaps the overall cost/benefit is such that LLMs should still be open-source, similar to how we don't restrict cars -- my point is that it should be an informed decision.
And we should also recognize that it's really hard to have this discussion in public. The best way to argue that LLMs could be used by terrorists is for me to give details of particular schemes for doing terrorism with LLMs, and I don't care to publish such schemes.
[BTW, my basic mental model here is that terrorists are often not all that educated and we are terrifically lucky for that. I'm in favor of breakthrough tutoring technology in general, just not for bioweapons-adjacent knowledge. And I think bioweapons have much stronger potential for an outlier terrorist attack compared with cars.]
Top AI researchers like Geoffrey Hinton say that large language models likely have an internal world model and aren't just stochastic parrots. Which means they can do more than just repeating strings from the training distribution.
Facilities are a major hurdle for nuclear weapons. For bioweapons they are much less of a problem. The main constraint is competency.
I think you might want to take a look at some of the history here, and particularly the cyclical nature of the AI field for the past 50–60 years. It’s helpful to put what everyone’s saying in context.
The bottleneck for bioterrorism isn't AI telling you how to do something, it's producing the final result. You wanna curtail bioweapons, monitor the BSL labs, biowarfare labs, bioreactors, and organic 3D printers. ChatGPT telling me how to shoot someone isn't gonna help me if I can't get a gun.
I think it's mainly an infohazard. You certainly don't need large facilities like for nuclear weapons that could easily be monitored by spy satellites. The virus could be produced in any normal building. And the ingredients are likely dual use for medical applications. This stuff isn't easy to control.