
That's why we're not already dead.

If anyone releases all the weights of a model that does everything perfectly (or at least can use the right tools which I suspect is much easier), that model is far too valuable to make it disappear, and dangerous enough to do all the things people get worried about.

The only way to prevent that is to have a culture of "don't release unless we're sure it's safe" well before you reach that threshold.

I'm happy with the imperfections of GPT-3.5 and 4, both for this reason and for my own job security. But ChatGPT hasn't even reached its first birthday yet; it's very early days for this.




> The only way to prevent that

You mean a complete hypothetical outside of sci-fi? Let's start worrying about alien invasions too?

Our planet is actually, not hypothetically, becoming uninhabitable due to pollution. I am so tired of ML people thinking they are gods who have created something of infinite power. The hubris.

The bird's-eye view is that we need tons of major breakthroughs to overcome this climate disaster while also figuring out how to make 8 billion+ people comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a net negative effect on the lives of billions of people by slowing down the progress that could be made.

AI X-risk is a complete sham being used to try and control a new, powerful tool. Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurement. AI has zero ability to do any of that; we don't even have online learning (I think that's the term, where the model learns from its usage) in any of these large models.


> You mean a complete hypothetical outside of scifi?

18 months ago, so was having an AI make even so much as a toy website by drawing a sketch on a sheet of paper, taking a photo, captioning it "make me a website that looks like this", and pressing the "go" button.

> Our planet is actually, not hypothetically, becoming uninhabitable due to pollution. I am so tired of ML people thinking they are gods who have created something of infinite power. The hubris.

So much irony there.

No, the planet is not becoming uninhabitable. Bits of it are, and this is bad, and this is leading to mass migration which is causing political drama.

Lots of people out there get benefits from the things that cause all the various kinds of pollution, from hyper-local things like littering and fly tipping to global things like CO2 and CFCs, and the arguments they use are sometimes the same ones you just used — things like "I am so tired of these Greta Thunberg people thinking humans can change the environment. The hubris."

Also, no, nobody thinks we've already created a machine god. We think we might, eventually, with a lot of concerted effort, be able to make something that's somewhat better at every cognitive task than any human, but not only do even the most optimistic estimates place that several years away, but quite a lot of people are already going "that has so many ways it can go wrong, let's not do that".

Finally, one of the ways it can go wrong is basically hyper-capitalism: an AI tasked with making as much money as possible, doesn't necessarily come with the sort of mind that we have which feels shame and embarrassment when their face is put on an effigy and burned by people that would like their environment to not be polluted.

> The bird's-eye view is that we need tons of major breakthroughs to overcome this climate disaster while also figuring out how to make 8 billion+ people comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a net negative effect on the lives of billions of people by slowing down the progress that could be made.

Nah, we don't need a single breakthrough; we've got sufficient known solutions to solve it all already, even if there's not a single new idea. Just building out the existing research-level tech for storage and renewables is more than good enough for energy and transport, and similar solutions already exist for other domains.

Also, AI isn't just LLMs and non-LLM AIs do actually help with this kind of research, it's just not exciting to the general public because 50 pages of "here's how we Navier-Stoked ourselves a new turbine design" will have most people's eyes glaze over.

Unfortunately, and directly relevant to your concerns about pollution, the fact AI means more than LLMs also means that last year a team working on using AI to test chemicals for safety before they get manufactured… found 40,000 new chemical weapons in 6 hours by flipping a sign from "find safe" to "find unsafe": https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

> Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurment.

Yes.

> AI has 0 ability to do any of that, we don't even have online learning(I think that's the term, where the model learns from its usage) in any of these large models.

False, all false. AI can easily follow the scientific method; indeed, AI is basically just applied statistics, so it does this by default, and the hard part is giving it heuristics so it doesn't have to start from scratch on things we are literally born knowing, like recognizing faces.

Likewise, trial and error: that's what almost every model is doing almost all the time during their training. Only the most trivial ones can have weights calculated directly.
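To make the "trial and error" point concrete: for the most trivial models, like plain linear regression, the optimal weights really can be computed directly in one step, whereas everything else is fitted by repeatedly guessing, measuring the error, and adjusting. A minimal sketch in Python/NumPy (the toy data, learning rate, and iteration count are all invented for illustration):

```python
import numpy as np

# Toy data: y = 1 + 2x with a little noise.
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(-1, 1, 100)]  # bias column + feature
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.01, 100)

# "Trivial" model: weights computed directly via the normal equation.
w_direct = np.linalg.solve(X.T @ X, X.T @ y)

# Everything else: trial and error, i.e. iterative gradient descent.
w = np.zeros(2)
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= 0.1 * grad                        # adjust the weights, try again

# Both routes converge to roughly the same weights, ~[1, 2].
print(w_direct, w)
```

The same adjust-and-retry loop, scaled up to billions of weights and done stochastically on mini-batches, is what training a large model is.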

Also, physical embodiment is a huge field all by itself. Tesla's cars and robots, Boston Dynamics — and, surprisingly, there's even a lot of research connecting robots to LLMs: https://github.com/GT-RIPL/Awesome-LLM-Robotics

Finally, "online learning" is only one of many ways to update models from usage; ChatGPT does something (not necessarily online learning, but it could be) with the signals from the thumbs up/down and regenerate buttons to update either the model or the RLHF layer. Even the opposite of online learning, offline learning (AKA batch learning), can update models in response to new data. The term you were probably after is "incremental learning" (which can be implemented in either a batched or online fashion), and one way you can tell that an LLM (OpenAI's or anyone else's) is doing this is by watching the version number change over time.


> 18 months ago, so was having an AI make

Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

> No, the planet is not becoming uninhabitable

We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food, and we haven't solved this problem. If we can't feed everyone, the planet isn't habitable.

> lots of people out there get benefits from the things that cause all the various kinds of pollution

Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.

> AI can easily follow the scientific method,

It can't interact with the world so it can't perform science. Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.

Making stuff in real life is really hard even with humans. We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.


> Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

What you were "waiting for" is highly irrelevant. People have waited for AI science fiction for decades; the relevant thing is that it is increasingly becoming real.


> Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

If you were expecting Photoshop, an image manipulator, to produce a website, which is a mixture of HTML (text) and images, on the basis of a combination of a prompt and an example image… then you were more disconnected from the state of AI research at that time than you're accusing me of being now.

> We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food. We haven't solved this problem. If we can't feed everyone it's not inhabitable.

There are many known solutions, both to the destruction and the pollution, and indeed to feeding people in closed systems. All we have to do for any of these is… implement them.

>> lots of people out there get benefits from the things that cause all the various kinds of pollution

> Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.

The "breakthroughs" are in the past, we've already got them — we just need to do them.

>> AI can easily follow the scientific method,

> It can't interact with the world so it can't perform science.

Can too, so you're wrong. In fact, most science these days involves tools that are controlled by computers, so it would be less wrong (but still a bit wrong) to say that humans can't do science.

> Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.

Irrelevant.

Also, do actually follow that link I gave you before: https://github.com/GT-RIPL/Awesome-LLM-Robotics

> Making stuff in real life is really hard even with humans.

Most of the problems with manufacturing these days are specifically the human part of it. Computer memory used to be hand-knitted, we don't do that for modern computers and for good reason.

> We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.

Simultaneously irrelevant (lots of research doesn't involve fabric handling), and false.

So incredibly and wildly false that when I searched for examples, I got a page of sponsored adverts for different fabric handling robots before the content.

Here's the first non-sponsored search result, a corporate video from a year ago, so unlikely to be state-of-the-art today: https://www.youtube.com/watch?v=2JjUnKpsJRM (They're specifically about re-shoring sewing away from sweatshops).


> All we have to do for any of these is… implement them.

An idea isn't a solution. I don't know what you are even talking about. Until we are actually solving these problems in a substantial way we have nothing but hope, we don't know that anything will pan out.

> Can too.

There is no 100% automated lab. Tools being controlled by a computer doesn't mean they aren't loaded, prepared, and, most importantly, maintained by humans. And science requires different types of labs. I just watched a documentary about the making of the new malaria vaccine; the contrast between how challenging it was to produce the ~cup of vaccine needed for clinical trials and producing enough for validation was fascinating.

> Irrelevant

No, it's not. We are so far from 100% automation of anything. Some human being has to install and maintain literally everything in every factory. Nobody is making self-maintaining robots, much less ones that can do everything.

> So incredibly and wildly false

Comparing human seamstresses to even the latest crop of robotic fabric handlers (which haven't seen mass-market penetration, best I can tell, so are still unproven in my book) is like comparing OSMO to a construction worker. It's not false. That video, which I watched with interest, is not convincing at all, having seen more traditional jeans-making operations.

> Most of the problems with manufacturing these days are specifically the human part of it.

Because the human part is by far the hardest.

> do actually follow that link I gave you before https://github.com/GT-RIPL/Awesome-LLM-Robotics

OK, and? Nice Gish gallop, I guess?



