Hacker News
Kickstarter shuts down campaign for Unstable Diffusion amid changing guidelines (techcrunch.com)
171 points by nonbirithm on Dec 25, 2022 | 275 comments



This sort of thing is why, for all their faults, I think we need decentralisation and “crypto” to succeed.

We need to replace platforms like Kickstarter with standards and protocols so that shutting things down because someone doesn’t like them (or even preventing people from behaving illegally, especially if you’re not the cops) can be made impossible.

Let bad ideas be rejected by society, not blocked by someone claiming to act for society. Because inevitably they don’t represent everyone, and they always end up acting with self interest, no matter how good the initial intentions.


>Let bad ideas be rejected by society

This has never worked. Society is filled with people who have to work every hour just to afford the essentials of life and can't spare a few hours to research which refrigerator will last the longest for the money, much less understand content creation methodology.

When you have to decide whether to pay for a dentist bill or your retirement you have no ability to vote with your wallet over something as trivial as AI art.


Of course freedom has worked, what...

Yeah, you know what they say about good art: it must play by the rules.


Laws have worked.

The market never solved the fact that it's cheaper to dump things in rivers and lakes. It never solved the fact that optimizing vehicles for the safety and comfort of those inside them is a huge risk to pedestrians, cyclists, and others outside the vehicle. It never solved the fact that it's cheaper to just keep perishable food on the shelves until it sells. It never solved the fact that claiming a medicine solves illnesses is very different from ensuring that it solves them.


> Laws have worked.

Huh?

> The market never solved

I mean do you buy products that are cleaner, better to their employees, blah blah...? Do most people, if the opportunity is there? If the opportunity is not there, why not?

Seems like you're going off into space here. Capitalism and private ownership of things and ideas is great. Sorry?


Most people do not.

Not all options are available to everyone. If the local grocer stocks one brand of milk, that is what people buy. If I need medical assistance I'm not going to research which hospitals near me stock ethically sourced bandages before calling an ambulance.

No one has time to do detailed research on every company they buy things from. I have no idea how Speedway gas treats its employees. I don't even know if their gas is better or worse than competitors. I do know that they're on the route between work and home.

Most people don't have the resources to act on these decisions. Kroger is an awful company that treats its employees like trash, but that doesn't magically make everyone able to afford Whole Foods prices. This is assuming there are multiple stores in my market, or multiple stores equally accessible to transit users, etc.


> Most people do not.

Can you quote what you're replying to?

> Not all options are available to everyone

???

> No one has time

???

> Most people don't have the resources

You're off in space. Not sure what you're on about, and I'm sure you don't know either. You don't have the solutions to the world's problems, you don't even have your life together.

A question for you: how long are a man's legs?


Your post had exactly three questions:

>I mean do you buy products that are cleaner, better to their employees, blah blah...? Do most people, if the opportunity is there? If the opportunity is not there, why not?

My reply addressed all three points. The first sentence (most people do not) was a direct response to your first two questions ("do most people?"). The rest of the post was an answer to your third.

I no longer find this conversation productive, and this will be my final reply to you.


You didn't answer if you buy goods intelligently. So I'll assume no you don't buy good products, cool.

You're surrounded by amazing advancements, only to spit in their face: not good enough. LOL


> people who have to work every hour just to afford the essentials of life

Yeah, this isn't true at all. The norm is 40 hours per week, with strong governmental protections requiring extra compensation for more than this.

Most people clock out and go home to watch television or scroll social media. The statistics don't lie[1][2].

This meme that the "poor, downtrodden masses" are just too overworked to do anything productive with their free time has been disproven time and time again. Human beings are dopamine response state machines living in a Skinner Box. We have plenty of free time, but we consistently make poor decisions with regard to how to use it.

[1] https://www.forbes.com/sites/petersuciu/2021/06/24/americans...

[2] https://www.statista.com/statistics/411775/average-daily-tim...


Cherry-picked statistics from a time when lockdowns were in place and people were stuck in their homes desperate for escapism do not prove your point.

https://www.theguardian.com/us-news/2022/nov/05/multiple-job...


> someone doesn’t like them

AI art is literally working against what Kickstarter was made for.

AI art is about putting together the work that artists have done in a way that isn't technically theft (well, it might be; the law is very slow about reacting to this sort of thing, and you can't go after someone for copyright violation as a class action lawsuit).

If someone wanted to pay artists to create a brand new collection of art to train AI on, that would be fine. However, the reality is current AI art is all farmed from publicly "available" images. (Copyright law currently allows you to display images for public consumption without worrying about those images being re-appropriated; AI art bypasses that.)

> Let bad ideas be rejected by society

Except society doesn't understand what is going on, and explaining it requires expert testimony. Of course, you can discard those complaints by just making shit up and sounding nearly as convincing.

> Because inevitably they don’t represent everyone

Kickstarter didn't say they were doing this for everyone. They explicitly said they were doing it to support artists. Artists haven't universally called out AI art as theft but a ton have.


> AI art is about putting together the work that artists have done in a way that isn't technically theft (well it might be the law is very slow about reacting to this thing and you can't go after someone for copyright violation as a class action lawsuit)

How do you think humans learn art? Humans aren't magic, they learn to make art by looking at and being inspired by other art, and I'd bet quite a lot of that art is copyrighted. AIs are just doing the same thing but way faster, and I don't see any issue with that. (Same with GitHub Copilot for that matter.)


> AIs are just doing the same thing but way faster, and I don't see any issue with that.

Isn't this like saying "cars are just doing the same as horse drawn carriages but faster, they shouldn't be regulated differently"? That "faster" changes the game completely. Similarly, many of the laws applying to physical actions can't be immediately transposed to electronic actions.

The current rules were made under the assumption that we're dealing with humans and "natural" laws. Like the fact that humans eventually die while an AI can live forever or transfer the entirety of useful knowledge to the next generation. A human doesn't have time to steal in a lifetime as much as an AI can do in mere minutes. A human can be punished for breaking the law, especially for repeat offenses, whereas an AI can't be even if it breaks the law more times than all human artists put together. These are all complete game changers, speed is only one of the many compounding factors. Scale changes things, a punch in the face isn't just "a gentle tap, but faster". An AI can suck in all of human knowledge and work with all of it at the speed of light.

> How do you think humans learn art?

Humans as a race have created art from literally nothing. Humans as individuals have constantly advanced it with original material. AI seems to be by definition derivative. Humans are encouraged to create something as original as possible and attacked if they borrow too much; AI is "encouraged" to copy as much as possible and, as this discussion stands to prove, also to "borrow".

This at the very least invalidates the too simplistic view that AI is the same as humans but faster.


> Humans as a race have created art from literally nothing.

No, they didn't? Humans don't have any magic creative juices that allow them to come up with ideas out of thin air, that's silly. Ideas are created from our brain using our past experiences and knowledge, why do you think some of our earliest art is of animals?

Also, I'm not sure comparing artists to horse-drawn carriages is the best analogy to support your argument, considering horse-drawn carriages are almost nonexistent nowadays. I'd argue a better analogy would be something like chess or any other game. When computers beat humans at chess, nobody changed the rules of chess to make computers perform worse. People just acknowledged that computers were better.


I'm not sure if you understand my point and you're also undermining your own points almost as soon as you make them. I don't want to stop computers in their tracks. To this day in the interest of fairness we apply different rules even between humans in a lot of things. Suddenly bundling computers and humans under the same rules is a severely flawed idea.

> Ideas are created from our brain using our past experiences and knowledge

Humans experienced random things in life and some created art from it. Sometimes after hallucinogenic drugs they created art even from things that never existed. Then other humans created something new on top of the existing art. AI "experienced" already created art and was tasked to recombine pieces to create more art. Simply put, nobody gave an AI sounds of nature and had it come up with a symphony after some thousands of human years' equivalent of processing.

> Also, I'm not sure comparing artists to horse-drawn carriages is the best analogy to support your argument, considering horse-drawn carriages are almost nonexistent nowadays.

Not sure why this harms the analogy, it's meant to point out that "speed" makes a difference and changes the rules. Artists may also be nonexistent tomorrow. If AI is doing everything legitimately and creating art the same way as humans but faster and better, why would human artists continue to exist? And then what would you feed as training material to your AI? The dictionary definition of "love"? Sounds of nature? Art created by the previous AIs?

> When computers beat humans at chess, nobody changed the rules of chess to make computers perform worse.

Right, we just don't allow computers to play the game in official positions. Different rules, as it were.


in the whole discussion, I feel you are missing a point. AI creates something that looks like art to humans based on a probability distribution.

for me, art is not the end product but the process. I want to understand the artist's intentions, and often a work that looks bland becomes deep because of the artist's explanations.

Take Agnes Martin's paintings as an example.


The way you say “something that looks like art” feels like you don’t consider the output of these AI models as art, and that generally speaking you have specific views on either what art is or what counts as art…

So what of the Dadaists, the postmodern and the rest, where art is only art because we say it is art, and the creation of art need only involve the "artist" scribbling their signature on the object to bestow it with "artistic merit"? I cannot see a good way to reconcile a philosophy like this with AI and art (law is different here; plenty of art can be illegal, e.g. graffiti)… Duchamp's readymades, the products of a factory meant for consumer use, are Art… AI art is the product of a sophisticated set of multiple software tools combined and then used by a person to produce their own artistic vision; the output is Art.

If Duchamp signing a urinal makes the urinal art then me using a prompt to craft a specific artistic outcome arguably required more creative effort to understand my “tools” and use them to produce a result that should also be considered art if I want to call it that.

I accept that we may come around and regulate and legislate these tools, just like how it’s illegal to make a graffiti masterpiece on someone’s wall without permission, it may eventually be illegal to use models trained with “unlicensed” data and the matter settled… but can we quit with this denigration of the output as somehow not actually art?


It might be art, but it's art created by humans using matrix multiplications and a lot of data. For me, the image generators are tools. They don't have intention or consciousness, and don't "know" why they produce something. So if you are asking whether AI alone can produce art, I would say no, because AI lacks these properties. You have to present a prompt and press a button; that person is the creator of the Art. The AI, when prompted, can just produce something that looks like art to us humans. The user who prompted it has to add the "process and intention" to it.

Regarding Duchamp's Urinal, the intention and the critique it voices is critical for being considered art (and a lot of people still argue about it). https://www.bbc.com/culture/article/20170410-the-urinal-that...

There have been a lot of upside down urinals in the world before Duchamp did that, yet nobody considered it art. Here again the intention matters. Matrix multiplications on big data don't have any intention.


Scale enables all sorts of things in technology that require new rules.

In theory, you could have 1000 people watching cameras with a list of people to watch for in 1980, but in practice it was too expensive and not a consideration. Now you can cheaply record tens of thousands of hours of video indefinitely and run analyses on everyone in them. This has huge implications on freedom and society and calls for new rules.

Just because technology is doing the "same thing" that was done before at scale does not make it the same thing.

AI is the same.


>https://twitter.com/sinix777/status/1604514161391677440

'Digital devices are not afforded the same "observational rights" as humans. You can't just buy an extra ticket for a video camera and take it to watch Avatar with you. And you're definitely not allowed to sell those "observations" no matter how compressed the format.'


A computer algorithm is not a human and should not have the same rights as a human.

I'm getting really tired of repeating this really simple concept that AI evangelists can't seem to understand.


I'm not saying that AI should have rights. I'm saying it's silly to call it "theft" when an AI does it, but "learning" and "inspiration" when a human does the same thing.


Humans generally recognize when they are plagiarizing. AIs don’t.


When I dated a musician, she would randomly point out all the pieces in various kinds of music that are "borrowed" or blatantly copied from other music. It is understood among musicians that almost all music are derivatives of prior art.


While yes a lot of music is based on prior art there are levels of that and there is a lot of case law on what doesn’t break copyright. There is a lot of music that steals from classical but that is public ___domain. There are rules about how much of a riff from a copyrighted song you can borrow before it becomes copyright infringement and you owe the original writer royalties. Music also has a lot of standard conventions that lead to similar conclusions at least if you are using western music theory since that is based on western classical music.

So an amount of this deals with scale and the copyright status of the prior art.


Yes, and they also put a lot of effort into intentionally disguising the fact that they're plagiarizing. AIs don't.

The real problem is that our definition of "plagiarism" did not anticipate, and was never intended to describe, anything even remotely like ML.


If you use MS Paint to copy three photos you found online you are violating their copyright. Your personal insight into how it is done might provide you your own copyright and exactly how you use the other things might make your usage fair use.

However the AI process does not create a copyright (there is no human involved) and cannot use the transformative exceptions for fair use due to, again, no human being involved.

The thing to remember is AI literally copies the originals. While it mashes them together in a way that makes the lines hard to see that is in fact what it is doing.


> The thing to remember is AI literally copies the originals. While it mashes them together in a way that makes the lines hard to see that is in fact what it is doing.

I see this argument a lot, but it cannot be true. The image database behind Stable Diffusion is 5 billion images, resized to 512x512 each and compressed down to roughly 100,000 gigabytes. But you can run it on a model file that is as small as 2 gigabytes. The originals are not in there; they have not been compressed to 4 bits per image.

I've studied and used AI art generators for a long time now and I can tell you from both a technical and experiential POV, the images they create are not mashups.
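For what it's worth, the arithmetic behind these figures is easy to check. A rough back-of-the-envelope sketch, using the approximate sizes quoted in this thread (not exact LAION dataset or checkpoint sizes):

```python
# Back-of-the-envelope check of the sizes quoted above.
# These figures are the thread's approximations, not exact dataset/checkpoint sizes.
num_images = 5_000_000_000        # ~5 billion training images
dataset_bytes = 100_000 * 10**9   # ~100,000 GB of resized, compressed images
model_bytes = 2 * 10**9           # ~2 GB model checkpoint

per_image_dataset = dataset_bytes / num_images  # bytes of source data per image
per_image_model = model_bytes / num_images      # model-weight "budget" per image

print(f"{per_image_dataset:.0f} bytes/image in the dataset")  # 20000 bytes, a small JPEG
print(f"{per_image_model:.1f} bytes/image in the model")      # 0.4 bytes, about 3 bits
```

Both sides of the thread end up agreeing on this arithmetic; the disagreement is over what it implies.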


None of the evidence you present here supports your conclusion:

20 KB per image is a JPEG's worth of data, so the dataset is certainly not throwing anything away.

A tiny model file without logic as to how you got to it isn't interesting. You could just throw away 99.5% (or whatever the ratio is) of the data and call it good. Mishmash data before you do and you might not even have to do that.

This style of AI is all about getting right past the human ability to perceive the mishmash and stop there. Given the underlying algorithm is literally about mishmashing I don't know why you are trying to fight that point.


0.5 bytes per image. Not 20,000 bytes per image.

Mishmashing implies that small clumps of pixels are in the model file somewhere. That's literally not true and has no relationship to how the algorithm actually works.

There's a ton of misinformation about how this works being tossed around. It's causing a lot of anger over ideas that are simply false. That's what I'm fighting.


100,000 GB / 5 billion = 20,000 bytes per image, off the numbers given here.

Don't quote the 2 GB one that one is disingenuous.

That is like copying the dictionary, sorting by word frequency, and keeping only the most popular words, then saying "see, I couldn't have copied the dictionary, it is too small to have done that".

And no the fact that it has "all" the data isn't material here.


> The thing to remember is AI literally copies the originals. While it mashes them together in a way that makes the lines hard to see that is in fact what it is doing.

As another commenter already pointed out, this is literally not possible. The model size for Stable Diffusion isn't nearly big enough to contain all its training data. Understanding the concepts (at least on some abstract level) of the artwork it's trained on is the only possible way a 6 GB[0] model can be based on such a massive data-set.

In conclusion, the important thing to remember is that AI does not just copy and paste the originals. It's simply not possible.

[0]: https://huggingface.co/stabilityai/stable-diffusion-2-1/tree...


Do you agree with copyright laws?

Are they the best way to reward creation?

Do they serve humanity well?


> Do you agree with copyright laws?

In their entirety? No of course not.

Certainly the duration is ridiculous beyond measure.

But the idea of having no protections for your own work seems barbaric. At least for work that requires "work" for lack of a better word.

I think the laissez-faire nature of online memes, for instance, makes sense in that space.

> Are they the best way to reward creation?

I think that providing a mechanism for people to share their work without losing control of it makes sense.

> Do they serve humanity well?

While I wouldn't attribute society's recent online boom to them, I do think that having protections of this kind is one of the reasons we have seen such a huge investment in the space.

Like the current internet or not, the hundreds of billions or trillions pumped into it would be harder to stomach if the internet were the wild west some would like it to be.


Honestly, I suspect creating

> a mechanism for people to share their work without losing control of it

Is impossible.


Napster thought so too


I think humans don’t learn art like statistical learning AI. What did Dali steal copyright from? I think humans are not like this in the same way that Ramanujan was not like ChatGPT.


All art, to some degree, is farmed from public imagery. That isn't a great argument for AI art being theft. All major styles of visual art came from somewhere, and people copy them anyway. If you study art in college, art history is literally part of the curriculum and you spend time trying out different mediums and styles pioneered by other people.

Truly original art is rarer than 8' tall humans, but that doesn't stop the industry from existing at scale.


AI art is more akin to uploading a copy of someone's work and claiming it as my own than it is to inspired work.

It would be one thing if the AI were actually trained like a human was and created unique artwork based on ideas given to it. The reality is it copy/pastes parts of images it found online.


> It would be one thing if the AI were actually trained like a human was and created unique artwork based on ideas given to it.

I am not sure you understand all this, because this is exactly what Stable Diffusion does. Give or take the “like a human does”, because we don’t understand how humans work.

> The reality is it copy/pastes parts of images it found online.

It does not. This is physically impossible; the pictures used during training are orders of magnitude larger than the model. It does not contain any description of those pictures used in training and you cannot get those pictures back.


What make you think human inspiration doesn't work the same way as image diffusers? To a degree we know we are compressing input features into our own latent space and conjuring them when producing our own works. The math may not be exactly the same, but the principles are eerily similar.


It isn't surprising given this rhetoric was created by AI art producers to avoid having to pay for their training sets.


I'm basing this on the architecture of the models, not any rhetoric. How does a person draw inspiration in your words then?


I don't think you understand how ML models work at all.


I don't work with them professionally, but I've done several small projects using them, and even written smaller models from scratch. I wouldn't call myself an expert, but I definitely know more than nothing.


Luddism has a pretty long history, by now it should be clear that fighting against technological advancement is such an impossible challenge that it's utterly pointless and a massive waste of time.

While I understand why some artists are upset, the ones who succeed in the future will be the ones who use the technology in their work, instead of those who just complain and whine.


I disagree. Fighting against technology has a point. Some people are already recognizing it and using less technology, and trying to live simpler lives. Every dollar that is not spent on technology slows its progress, and if enough people quit spending money on it, it cannot develop.

Movements can grow unexpectedly and seemingly out of nowhere. It may be possible that enough people will enjoy the idea of living a life unencumbered by pointless junk and if so, their actions can have the power to shut down companies like OpenAI and the like.

If I could convince even one other person on this planet of the dangers of technology, I would consider the effort I put into talking about it worth it. In fact, I myself have been convinced by other people and in turn I have already spent far less money on technology because of it. Maybe it's a drop in the ocean, but drops of rain can become floods, and one day I truly hope that flood will come to completely shut down efforts like UnstableDiffusion and similar projects.


Honestly I think that's an absurd pipedream, OpenAI can't keep their servers stable due to the massive demand and Unstable Diffusion has over 100 000 users in their Discord.

To hope that they shut down due to a lack of users is just a thought process that is only guaranteed to cause unhappiness.


Unhappiness? That must be a strange definition of happy you are using. All those people can go on to amuse themselves some other way. It's as silly as saying that people would be unhappy if no more Marvel movies were made. It's just a little diversion and people would go on to do something else.

I think it's very unlikely that true happiness is caused by the highly specific diversions we have created in society. I can put myself out there -- I have hobbies too. And if they stopped, I would miss them a little. But I certainly wouldn't go far as to say that would make me unhappy.


I think you're whooshing here.

Thinking you can control others will lead to unhappiness, I think is their point.


I don't aim to control anyone. Only not support AI and find other people of a similar mindset to do so, and spread this message. People can of course ignore it.


Uh huh....


Build an AI using copyright free work and we can talk.

Stealing other's work under the guise of "advancement" isn't futurism...


As a content creator, I believe Unstable Diffusion and AI models are theft, at least in a moral sense. Computer programmers have taken all the art that humans have worked hard to produce, and are producing it themselves at a rate much faster than any artist can and reaping the financial benefits. Because AI is so general, it operates on a scale not previously known in the realm of automation, and so it's too much of an unfair advantage -- far more so than someone coming up with an invention that makes a few other previous inventions obsolete.

AI is like an alien race coming to our planet. The alien race is so advanced that it can do everything better than any human, and they use their technology to sell us perfect products of every nature in return for degrading acts. Such a thing disrupts the fabric of society, which is held together by people needing each other.

Computer programmers and tech companies are not just robbing artists, but robbing all of humanity of the necessities that we need in order to have a fair chance at this world.


> Computer programmers and tech companies are not just robbing artists, but robbing all of humanity of the necessities that we need in order to have a fair chance at this world

A fair chance at what exactly? Creating corporate memphis for Big Co?

Besides, isn't AI just a tool that humans use? Unless I am missing something, people still prompt the AI, curate its output, and edit the final results. Perhaps this will make art accessible to more people, thus giving everyone a fairer chance in this world.


You are right in some respects. There are moral issues, and this is a machine coming for (parts of) your job.

That said, it is not theft, however you slice it. What you express is what a lot of other workers feel when they see automation and mechanisation making progress in squeezing them out of their jobs. It is normal to be afraid, but you are not going to stop the tide. Some people will get out of business, some people will keep doing their thing until they retire, and over a generation or two a shift will have happened.

If it makes you feel better, pretty much no job is immune from this, in the same way that all manual labour is susceptible to being replaced by robots. Other people will feel the same way you do now before long.


If it's not reproducing protectable expressions then it's not theft on any level, but merely another tool for producing art, albeit at an industrial scale. This happens in every profession eventually. It's simply your turn.

For my part, I'm quite content with this state of affairs. Now, rather than paying exorbitant amounts of money for lackluster results from artists, people can use so-called AI to generate decent results on the cheap. On a long enough timeline this should result in the price of art assets for games dropping into the basement where they belong, taking the price of games with them in the process. The public benefits, while intermediaries lose. I fail to see the problem.

TL;DR Humanity should not be compelled to dwell in darkness so candlemakers can continue to profit.


I certainly don't see how not using AI is dwelling in darkness. We can have a very nice and functional society without it.

You also say that you hope AI will drop the price of art to the basement where it belongs. What if AI drops the cost of generating a computer program to the basement where it belongs, and programmers will be paid minimum wage? Would you be so content then?

Frankly, the attitude on this board is dismaying, but that is expected because the people who are benefiting from AI with higher salaries are currently the programmers. It has nothing to do with logic or fairness, only selfishness.


You clearly aren't familiar with the state of advancement in being able to develop an app in the last 40 years.

I have a rigorous academic background in computer science and you don't see me throwing a tantrum that your average layman today can fire up unity or RPG maker to create games without having a firm grasp of rigid body Newtonian physics, game loops, asset management, graphic drivers, and handrolled x86 assembly.


I started programming on a 486 and graphing calculators and have used some pretty modern frameworks and languages as well. It's great that you have an academic background. I assure you I have a very rigorous one also, including publications, and I have a very deep knowledge of many technical disciplines.

My argument is not that we should avoid everything that makes tasks easier, but that AI does it on a fundamentally different scale, and that should be treated with caution.

It's like salt. I put salt on my food, but not too much. Some restaurants put way too much salt on their food but you can still eat there. AI is like eating handfuls of it, which will kill you.


> What if AI drops the cost of generating a computer program to the basement where it belongs, and programmers will be paid minimum wage? Would you be so content then?

I'd be more than happy to get rid of the "programmers" that are generating the same quality code as Copilot. Engineers are employed to translate human needs into tangible system designs, that they have to write code to do so is mostly an artefact of our current system-building methodology. In fact, I would be immensely happy if I could get rid of that part.


> Engineers are employed to translate human needs into tangible system designs,

I mean, isn't this achievable with prompt refinement?

One could argue that it wouldn't satisfy some $arbitrary_dimension (scalability, accessibility, whatever), but you can imagine it wouldn't be hard to get there with prompt refinement eventually, especially with an AI trained to satisfy SWE requirements (and I'm not talking about GitHub's Copilot, which is an early application of a fairly young technology).


Right now, how DALL-E works is that you enter a prompt, it gives you a few images, you fine tune your prompt, and after a few rounds you may pick an image and maybe polish it before you can use it.

What would the co-pilot equivalent be? You give it a prompt, it generates a few repos, but then you still have to understand the whole project to judge that it respects the system needs and restrictions. After a few rounds, maybe you will have to change up the code a little, but 90% of the code will have been written by the machine. I, for one, welcome this future -- because once again, this "prompt refinement" iteration is where our human creativity is used most effectively; I think it's a significant improvement over writing boilerplate code every time.


Then again, it also seems we perform worse when using AI tools, so maybe I'm wrong https://arxiv.org/abs/2211.03622


> I certainly don't see how not using AI is dwelling in darkness. We can have a very nice and functional society without it.

We can also have a very nice and functional society without penicillin, hot water, or fire... but nobody argues in favor of getting rid of those things just because doing so would make life more profitable for the few. In fact, most people would be horrified by the selfishness of such a suggestion.

> You also say that you hope AI will drop the price of art to the basement where it belongs. What if AI drops the cost of generating a computer program to the basement where it belongs, and programmers will be paid minimum wage? Would you be so content then?

I would, yes, since I'm not employed as a software engineer and I'm tired of cleaning up after those who are who can't write programs worth a damn.

> Frankly, the attitude on this board is dismaying, but that is expected because the people who are benefiting from AI with higher salaries are currently the programmers. It has nothing to do with logic or fairness, only selfishness.

...or we recognize your class interest for what it is and we are not impressed. Everyone is made obsolete at some point. shrug Learn to code, I guess?


1. Of course I don't argue for getting rid of penicillin. Each invention has its own merits and each should be debated separately without making an analogy to another.

2. I am not personally worried about being made obsolete. I can code just fine and have extensive coding experience, and I can do a lot of other things as well. Nonetheless, I am still opposed to it. My opinion is just that we should slow down, examine how disruptive AI can be.

3. I only ask that we be more cautious towards technology. Currently our modus operandi is invent and release anything, as long as it makes money, which I think is crazy.


Why do you think the price of art assets for games belong in the basement? Do you value it so little? What about games themselves?


Not sure if this is what they're referring to, but there are people who misrepresent their skills, and they tend to produce unsatisfactory results when commissioned. Scammers, I guess? This particular scam seems to be breeding within the indie gamedev community (no proof of this, just a hunch based on anecdotes).

Funny enough (because it's about AI Vs Artists), you can see some of these scammers here.

https://youtu.be/5GO2xKmZsVo


Because of the massive needs of modern AAA games, budgets have increased dramatically, which leads to more risk for publishers and distributors, which leads to more conservatism in game design. Look at things like open-world games: we cannot have just 2 models for a given type of object, so we have a lot of designers churning out variations. This is mindless, thankless, and hugely expensive. I do think that more AI to speed up this sort of thing would be a good thing. Yes, there would not be as much demand for 3D modellers. Instead of having 20 people, you'd have 3 and a bunch of computers. On the other hand, it would be much easier to create large sets of assets for both AA and AAA games.

As usual, the devil is in the details. But we need more nuance in these debates.


Rather I value AAA titles so little that I'd rather see the price attached to art assets come down so that more companies can afford to enter the field and compete on a level playing field. In short: I want to see more competition, and lowering the cost to enter the market is one way of doing that.


Nonsense. The "art" you're creating is derivative of art other people made which you saw and learned from/incorporated into your own work.

More to the point, much of modern art is derivative nonsense that lacks soul - which is why it's modern artists who are scared of AI. Their work is easy to mimic.

AI isn't going to replace a Van Gogh anytime soon.


> Nonsense. The "art" you're creating is derivative of art other people made which you saw and learned from/incorporated into your own work.

Do you then believe humanity has exhausted its well of original thoughts? That would include you and this statement, by the way (unless you hold yourself in unnaturally high regard, or are a rogue AI)

Edit: phrasing


Kickstarter and co will continue to lose trust and their base and be open to disruption by other companies.

However, at least some of this is due to underlying payment providers. Those need to be gutted and replaced. See paypal and their $2,500 fee for saying mean things on the internet.


Kickstarter will not lose any trust with this decision, and if anything they probably gained a decent amount of goodwill from artists and their fanbases.


My take is that we really need to work with our legal frameworks, instead of trying to workaround and expect to succeed in the shadow of "the man".

> I think we need decentralisation and “crypto” to succeed.

I was waiting to see if crypto could mature and fit into our society, and my impression is that most successful actors don't care about how it's used and build a business around milking "investors", and there is very little effort to talk to existing entities to make it something that could actually work in our day-to-day lives.

It might feel like a wild take, but focusing on decentralisation will probably be a dead end, and it's fostering the idea that crypto will succeed in opposition to banks, governments and other financial institutions. I'm thinking about deposit insurance, anti-laundering laws, terrorist financing surveillance, customer fraud protection etc. There's no way crypto goes anywhere practical without those in place.

It's still fine of course for greyish activities (reward largely exceeds the risk) and anything that's small enough to fly under the radar (losses are too small to be critical), but expecting kickstarters to rely on that is adding more risks upon the mountain of risks that already exist in the business model itself.

On Kickstarter specifically, as of now Kickstarter could start processing money on its own, for instance directly exposing its bank account and have people wire in money. Or send cash in the mail. Or get physical goods sent to them that they further sell on their own. The crazy schemes would be infinite.

But the very point of Kickstarter is a secure and low-friction way to pool money. Moving to crypto, the equivalent would be to have a major trusted processor take people's pledges, validate them, but keep them around until the Kickstarter actually succeeds. We'd want that processor to be solid and have a long proven track record, otherwise trusting them with our crypto would be crazy. There probably won't be dozens of them, and with time they'll consolidate into perhaps 3 or 4. Or 2. And they might as well be named VCard and Masterisa.


In crypto your "trusted processor" could simply be a permissionless smart contract that releases funds if payments hit a certain threshold in a certain timeframe. Or if you need some decision making in the mix it could be run by a DAO, similar to how money was raised (and then returned to users) by ConstitutionDAO.
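For illustration, here's a toy model of such a threshold-release mechanism, sketched in Python rather than an actual contract language (the class, names, and amounts are all made up; a real contract would also have to handle re-entrancy, gas, and identity, none of which this models):

```python
import time

class ThresholdEscrow:
    """Toy model of threshold-release crowdfunding: pledges are held
    in escrow until a funding goal is met; if the deadline passes
    without the goal being reached, each backer can reclaim funds."""

    def __init__(self, goal, deadline):
        self.goal = goal          # funding threshold
        self.deadline = deadline  # unix timestamp after which pledging closes
        self.pledges = {}         # backer -> amount currently held
        self.released = False

    def pledge(self, backer, amount):
        assert time.time() < self.deadline, "campaign closed"
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def total(self):
        return sum(self.pledges.values())

    def release(self):
        # Permissionless: anyone may trigger release once the goal is met.
        assert self.total() >= self.goal, "goal not reached"
        self.released = True
        return self.total()  # in a real contract, transfer to the project

    def refund(self, backer):
        # After a failed campaign, backers reclaim their own pledges.
        assert time.time() >= self.deadline and not self.released
        return self.pledges.pop(backer, 0)
```

The point of the sketch is only that the "trusted processor" role reduces to a few lines of publicly auditable logic; whether that logic can safely be made irreversible is exactly what the replies below dispute.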


In the current scheme, if Kickstarter released the funding, only to realize within days that the campaign was fraudulent in critical ways and/or the system was compromised to get the funds to an unrelated entity, they'd revert the transaction at the processor level and make everyone involved whole in a relatively light process.

The options to game the system at scale are limited (by design), and that builds trust in the processors.

The automatic trigger baked into the system that you describe feels to me way easier to game, with rollbacks so expensive to do that they wouldn't happen short of something catastrophic directly affecting the very powerful entities. So not just for an egregious scam affecting a few thousand users...


It goes both ways: Kickstarter projects often face issues because a backer cancelled charges weeks after receiving their reward[1].

If you browse /r/shittykickstarters you'll see that Kickstarter struggles to police scams even using traditional payment rails. I think crypto has the potential to make this problem better, not worse. Funding could be released over a period of time or be milestone-based.

1. https://www.wnycstudios.org/podcasts/otm/articles/new-kind-k...


So, is your view that there are not enough rug-pulls and scams on Kickstarter already so we need this pseudo-decentralization?


Except that crypto would have changed absolutely nothing and that there are alternatives to Kickstarter


This argument boils down to “I wish everything was more like 4chan”.

If you’ve actually visited places with little to zero moderation and freedom to do anything the results are troubling at best. You don’t see the best of things with zero restrictions, these places are awful.

Somalia was stateless for years, was it closer to a paradise or hell?

Go browse /b/ and tell me you wish how all social media was more like it.


It doesn’t boil down to that at all.

Anyone can send me spam email; my spam filters pick most of it up and I never see it. I'd not choose to have corporate moderation of my incoming email even if it meant less spam.

The same applies to web sites. The web is full of all sorts of things you probably don’t want to see. You don’t need it to be removed by your government in order for the whole web not to degenerate to 4chan.

I don’t follow people who share objectionable things on Twitter, and if they start to I unfollow them. The only reason I’d see the crap would be if Twitter chose to show it to me. It doesn’t need to be “moderated” (deleted) for Twitter not to show it to me if I’m not following the author or someone who retweets it (and if I am, and continue to do so, then presumably I want to see it, and that’s up to me).

It’s simply not the case that the presence of objectionable things on the internet is the root of the problem. The problem, mostly, is perverse incentives for companies to promote it.

Until we realise this we’ll keep building algorithms that are incentivised to create division and stoke obsessions of all sorts to keep people online and watching ads, followed by wondering where our moderation went wrong and what extra controls we can impose to finally make things better.

It won’t work.


Are you aware of the paradox of tolerance?


>If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them.

>—In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols.

The second part is often left out of the discussion.


I'm very aware of the "paradox of tolerance". It's the most cited dogma used by the deliberately intolerant to excuse and even demand intolerance from others.


Are you aware of the equally common objection to “I can tolerate anything except intolerance”: that if allowed to censor in the name of tolerance, our self-appointed guardians will simply call whatever they don’t like “intolerance” and censor it? (Or “stochastic terrorism”, or “micro-aggression”, or “online harms”, or “misinformation”, or “trolling”, or “sealioning”, or “toxicity”, etc). Certainly one must draw the line somewhere. But the speech debate doesn’t end at the Paradox of Tolerance, and at this point I assume anyone who brings it up in isolation is doing so in bad faith.


Are you aware that the paradox of tolerance is a PARADOX? Most people quoting Popper online to promote censorship forget that part.

It's not a carte blanche for censorship.


Indeed. Intolerance is one thing it’s worth being intolerant of.


That’s a bit of a false dichotomy. 4chan’s boards are distinctly different from each other in terms of the quality of conversation and topics discussed. Moderating things for being spam or off topic is different from moderation with a clear ideological bias.


In Somalia you can't "block" a neighboring warlord and keep living your best life.

4chan isn't designed to actually be a great forum, but the nearly-unlimited content is not the problem. A lack of user-tools for sorting through it is.


Regulations serve a purpose. I am more afraid of what would happen if everything became decentralized. An effort I hope we can agree on was when platforms started removing COVID-19 misinformation.


But they aren't following laws, they are following their own arbitrary rules that fit their business and their personal politics. Same goes for merchants like Visa that will deny you access to their system even though what you are selling is legal.


I'd be okay if Visa and Kickstarter blocked porn, for instance, but I want them to be clear about it and make any card-denied messages explain this to merchant and customer. What I don't like is this gray area where CC companies won't block Pornhub for years of CSAM scandals but then finally get the nerve when a story catches the news.


I am not OK with it because it is not illegal to pay for porn with cash. Since Visa/Mastercard have a de facto monopoly on electronic payments and everyone is going cashless, it should be required by law for them to accept payment for anything, just like cash. Visa/Mastercard should, however, not be held responsible for the transactions if they have been used to pay for illegal things. Like the phone company and calls that facilitate illegal activity.

I didn't vote for visa nor mastercard. I voted for the government I have and they decide what is legal and what is not.


My reasoning is that if they stated their policies clearly, and had to do so in order to enforce them, then other processors would be able to move into that area. The theory is that porn has lots of chargeback and legal risks, and another company focused on this area could handle them better.

I support breaking down this monopoly as well though, because I acknowledge the ground reality of where we are now.


I've never been banned from as many platforms as in early 2020, when I was explaining that, without evidence, stating "the virus came from the Wuhan lab" is just as unscientific as "the virus DIDN'T COME from the Wuhan lab". They'd gladly post Fauci saying "it didn't" all day long and block anyone (me) saying "we can't know without investigating!"

I'm not part of the rep/dem "Chyna" fight. I just want my country to push the WHO to do the proper investigations, because it's the only way we improve our safety standards.

So no, I can't see any value in politically driven censorship.


Regulations are fine but enforcement via corporate gatekeepers is not, and neither is creating a world in which breaking the law is impossible.

Regulations create consequences for breaking them, and that is a good thing. Pairing regulations with control that prevents crime and civil disobedience is something completely different.


Your example is a clear case of government overreach, damaging to society and to trust in science


I don’t agree that you should prevent people from saying wrong things in case someone happens to read/hear and believe them, no.

If it’s wrong, counter it with warning labels explaining who thinks it’s wrong and why.

Yes, the consequence might be that at the margins more people believe wrong things even given the warnings (though despite some pretty concerted efforts, it seems pretty conclusive to me that that's going to happen anyway, and perhaps the desire to stem the flow of misinformation has only created more conspiracy theories), but the alternative of state control of what is acceptable thought / expression is far worse.

(Note that I am fine with, and would even encourage, preventing companies from profiting from furthering misinformation, for example by having algorithms that promote it beyond an account’s followers or a post’s organic reach because it produces clicks and ad revenue, but that is very different.)


Crypto has been and still is successful for this.

Nothing would stop this crowd fund there. Happens all day every day.


NSFW content is one use-case for the new model training. SD is horrible at generating photo-realistic images with correct anatomy; UD is supposed to fix that. They will also be working on creating a better dataset, partially human-curated with proper cropping and text. They will be bringing artists that were removed from latest SD models.

UD is the only currently community driven alternative. I hope they deliver on what they promised.


>They will be bringing artists that were removed from latest SD models.

Is this referring to artists opting-out (i.e. requesting to have their work removed) from the datasets used to create SD models? Because if so, I fully support that. In fact, I think artists should have to explicitly opt-in to their work being used in ML models. You don't get to violate someone's license terms and then point to an opt-out form to absolve yourself of liability, regardless of how good your intentions are or being a crowd-funded venture unconnected to any corporations with armies of lawyers.


Their license terms aren’t being violated. Artists give ArtStation and the like a license to use their uploads for training ML models.

ML models also appear to be more fair-use respecting than some human artists, who blatantly trace artwork. The ML model does not have the physical space to trace the works it's trained on.


> Artists give ArtStation and the like a license to use their uploads for training ML models.

As far as I can tell from a quick Wayback Machine test, that only seems to have appeared in the last fortnight[1]. None of the existing models could have been trained under the license.

[1] https://www.artstation.com/tos has it as section 46 "Artificial Intelligence" mentioning the "NoAI" tag. But https://web.archive.org/web/20221214082018/https://www.artst... has neither.


It sounds like they're saying that they're going to use the opted-out art :/


I guess Twitch needs to die in a fire then because it's entire premise (live streaming video games) is doing exactly that. Showing off artists work without their explicit opt-in.


Unrelated to all of this, I'm starting to feel like I'd just prefer if platforms stopped trying to make moral judgments on our behalf.

I didn't and wouldn't have backed Unstable Diffusion, but people were willing to spend money on it, it seems clearly legal from our current understanding, and there's nothing in the Kickstarter that suggests it's malicious or would otherwise violate any ToS or guidelines; in fact the campaign seems careful to suggest that they were intentionally aiming on making it difficult to abuse, which I know people have feelings about here too, but the point is that this seems like a textbook example of a project that dotted its I's and crossed its T's.

But some people didn't like it, I assume, and therefore they had to find a reason it violated their guidelines and shut it down.

I know that objectivity and neutrality are no longer hot concepts, and I also understand that not everything from the way things "used" to run would still work today, but I can't help but feel this was just a completely unnecessary concession that helps nobody. Crowdfunding was supposed to empower people to directly support things that they wanted to see, but when it goes through gatekeepers like Kickstarter and Gofundme, they become the ones with all of the power to decide what things are allowed.

I think it's probably all moot. Nobody would defend a campaign like this anyways, so trying to make the case that this is a(nother) bad precedent is impossible. People just simply don't give a shit anymore.


All this corporate activism is all fine and dandy when you agree with their politics. When you don't, they become the enemy.

I too prefer the days of neutrality and objectivity, a day when neutral platforms would act like neutral platforms.

What this is turning into is yet another case of tyranny by the moral majority.

Wherever there is power there will be assholes using it to control others.


Beyond the trivial, there is no such thing as a neutral platform. At any sort of scale, you have to make choices. Do you host spammers or people who hate spam? Do you host would-be ethnic cleansers or the people they would violently drive away? Do you host people eager to exploit children, or do you want your platform to be safe for children? Do you host people who are eager to bother [insert ethnic, sexuality, or gender group] or the people they would be harassing?

Even if the people you find to run the platform are completely amoral, these are also business questions. People have freedom of association, and they will use it. Businesses will too. If you want to run a "neutral and objective" hosting company that is happy to take the nazis and the spammers and the carders and the ddosers, you will quickly find yourself specializing in that market, because few others will choose to be associated with that.


No, I don’t accept that it’s “the people” boycotting Kickstarter? It’s a small minority of media and activists threatening platforms with clickbait headlines, doxxing, and regulation if they don’t give up neutrality, and a larger group who believes whatever the media tells them or has no incentive to speak up. The choice Kickstarter made wasn’t “Do my customers have a principled stance against Unstable Diffusion?” It was, “Do I stand up for my principles - or do I have to put up with $MEDIA_OUTLET running wall to wall stories claiming I hate artists, 200 activists dogpiling my every twitter post, and ordinary people unconsciously associating me with whatever caricature they read on Twitter?” In this case and most others, neutrality is impossible because a minority has chosen to make it impossible, not because either Kickstarter or its customers made a principled decision. This is why this entire thread exists: to counterbalance this and to make clear that cowardice comes with its own costs.


Sorry, how do you know that Kickstarter was being unprincipled here?

I also didn't say it was "the people" boycotting Kickstarter, so I'm not sure if you're replying to the right comment.


Quick policy decisions needed! Do we block or promote Rohingyan folk art? Are "Russian" nesting dolls a microagression? Are suffragette ribbons hateful now?

The idea that platforms, all of them, need to enforce the day-to-day minutia of the chattering classes is ridiculous.

Platforms need to pick a few broad policies. "No porn" is a pretty good policy actually, and stick with it. And of course actually illegal things, "anything that is obviously illegal, or that we are told to take down with a warrant".


That is one set of choices. You'll get some people and you won't get others. But by picking cartoonish examples, you're failing to grapple with the real choices that platforms have to make all the time.

There is no such thing as a broad policy, not in practice. If you need to get a group of people to make consistent decisions about what porn is, and a much larger group of people to understand where the lines are and feel your judgments are fair, then you'll need extremely detailed policies.


If you allow self segregation you can host them all on your platform. Don't show the far right any far left content and vice versa. Disrupt any coordinated campaigns.

It certainly used to be the case that DNS servers would happily resolve both stormfront and the workers party websites. Would be good to go back to that pre cancellation world.


Oh? Reddit allows self-segregation. Do you think they don't have moderation problems? They too had to make choices.

And if you want to host the nazis, nobody's stopping you but you. Most of us, though, don't think the world would be better if nazi propaganda were more easily available.


"Nazi" is such an annoying term these days... if a group shares a single idea or belief the nazi party did AND one doesn't like that group then they get called Nazis. Whether it's nationalisation, nationalism, not liking new pronouns, anti-immigrant sentiment, believing in violence to achieve political aims, state surveillance or ethnic segregation, believing Jews have too much influence or even having not disassociated from a group or individual with one of the above beliefs... having even a single thing in common (even if every other belief is different) is enough for the label these days...

Makes virtually every group (including antifa and BLM ironically) a "nazi" group, rendering the term meaningless.


Excellent try at running the conversation off into an irrelevant maze there. Who could possibly know if Stormfront, who you brought up and I was referring to, could be meaningfully be called neo-Nazi? Certainly not anybody very intent on keeping it murky. https://en.wikipedia.org/wiki/Stormfront_(website)

Regardless, your list of items help prove my point that there is no neutral. One can decide to platform or ban any or all of those things. Choosing to publish it all is no more neutral than banning it all.


> All this corporate activism is all fine and dandy when you agree with their politics. When you don't, they become the enemy.

This is in fact how human society works.


For but one counterexample among many, look at where we get our oil. Saudi Arabia is one of the most tyrannical, oppressive, and regressive societies, but that's all perfectly fine to the powers that be so long as they keep the oil moving. And I don't think this is really hypocritical.

The entire reason society works, or worked, is because people were willing to put aside their differences, regardless of how extreme, to engage in behaviors that were mutually beneficial. Start judging people based on adherence to your own politics and you'll find that, sooner or later, everybody will be your enemy.


There are hundreds of thousands, possibly millions, of people who do not regard Saudi oppression as "perfectly fine" regardless of which chemicals they offer for sale. I don't really accept this counterargument, since overlooking human rights abuses is one of the most common complaints about organizations like the UN, the United States government, and countless other political bodies.

As far as your second paragraph, in your framework society never really worked, which conveniently explains why there has always been war in some part of the world as far back as anyone can remember.


What people individually think has little to do with who is considered an ally and who is considered an enemy. These are decisions made by our "representative" governments which ultimately shapes society. In a democracy your voice is defined by what you vote for. As people keep voting for people that are more than happy to buddy buddy with anybody, regardless of how awful, so long as they are geopolitically obsequious, then that is the status quo of "society."

And there will always be war for the same reason there will always be murder. I would argue that rather than trying to frame a society around the concept of preventing war, it may be more pragmatic to try to minimize the impact of war such that otherwise minor conflicts between two nations don't end up dragging the entire world to war, for what is always framed as some greater good, but invariably just results in exponentially more death and destruction with far less than nothing to show for it.


> I too prefer the days of neutrality and objectivity…

When were those?


reddit and 4chan were famously very strongly "free speech" before gamergate kicked off our current political climate

People don't realize that you want the filth out in the open, where it can be tracked. Deplatforming doesn't kill anything, it just sends it underground, furthering the cause for the various shadow systems that spring up from it.


If you’re counting niche places like 4chan, you can still get to places like that today.

Reddit has been moderated the entire time, at a per-community level. It was never a free-for-all.

Forcing the shitty folks underground breaks the edgy-meme-to-radicalization pipeline quite effectively. I’ve greatly enjoyed not having to hear about Alex Jones and Milo Yiannopoulos every day.

https://www.niemanlab.org/2021/06/deplatforming-works-this-n...

> But kicking Milo Yiannopoulos off Twitter didn’t make him stronger. It pushed him almost completely off the mainstream agenda. And part of the “fun” of being a Milo fan was precisely that prominence! You weren’t just listening to a random online crank; you were listening to an online crank who was very good at getting under prominent people’s skin. Lose that reach and some of the thrill is gone.


> Reddit has been moderated the entire time, at a per-community level. It was never a free-for-all.

Yes it was, as once upon a time Reddit itself didn't take down a whole lot. If the mods of a given community didn't do what you think they are supposed to do then there were no real, enforced rules.


> Reddit has been moderated the entire time, at a per-community level. It was never a free-for-all.

Hah. They permitted /r/n****** for years. Just about the only thing reddit would outright ban you for was illegal content, and maybe automated spamming.


> Reddit has been moderated the entire time, at a per-community level. It was never a free-for-all.

Sure, but the moderation team members have dramatically shifted from (and I’m being glib here) “kinda cringey atheist libertarians” to “authoritarian left/liberals steeped in idpol”.

The “Reddit moderator” cohort whiplashed from one extreme on the political compass to its polar opposite extreme in only a few years. Not claiming anything conspiratorial since I can certainly imagine how it could happen organically, but it’s interesting nonetheless how much of an effect different moderation standards can have on large online communities.


Also note that it wasn't a conspiracy as reddit powermods like Merari (moderator of 300 subreddits) openly talk about it how they took control of reddit: https://old.reddit.com/r/ContraPoints/comments/arm0u1/meta_c...


> Reddit has been moderated the entire time, at a per-community level. It was never a free-for-all.

That is a distinction without a difference. Have you ever heard of violentacrez?


Note that reddit admin level moderation got more extreme over time. Banning many communities and threatening others to change. You may agree or disagree whether this is a good thing, but it definitely happened.


It decreases the total amount of it while substantially concentrating and intensifying it (after all, only the people quite unhappy with moderation will go to the new platform, and from then on they won't be exposed to contrary views).

You metaphorically decrease the number of people who might describe gay people with slurs while increasing the number who would shoot up a gay nightclub.


I bet there is some policy you would oppose on KickStarter.

Let's try this: a KickStarter to raise money to lobby for making encryption illegal. And let's assume it's very successful and it raises tens of millions of dollars. How would you feel about that? Would you sign a petition requesting KickStarter drop it because it's against freedom of speech?


Such is the price of free speech. "I may not like what you have to say, but I support your right to say it"

There are other, far more honorable, avenues to killing initiatives one doesn't like. What's to say I don't start a counter-lobbying campaign?

That so many embrace such a pathetic weapon as censorship speaks loudly about just how easily we sleepwalk into Orwellian society

I bet a lot of these folks would happily report their neighbors to the KGB or Stasi over falsehoods and bullshit, were this the appropriate era


> "I may not like what you have to say, but I support your right to say it"

The right to say “I disagree” and respond within the limits of the law is the other side of the free speech coin.

The first amendment gives Kickstarter the right to control what is on their private platform. There have never been “neutral platforms”. They’re a myth.

Kickstarter is doing what they think is best for them. Our legal and financial systems allow competitors to take advantage when miscalculations are made on the part of private parties.

Demanding “neutrality” on private platforms is a strictly anti-speech position.


How, exactly, is an aspirational goal of free speech a “strictly anti-speech position”?


Depends how you define free speech. I believe it means free expression, including the right to not speak.

Neutrality is a matter of opinion, hence the scare quotes. As soon as you compel someone to speak according to your definition of neutrality you violate their right to free expression.


Doublethink.


See Musk for a pretty great recent example of what the aspiration tends to mean in practice.


I agree that private companies can police as they see fit. But I am pointing out there was a culture of free speech absolutism, backed in spirit by the safe harbor internet provider legal stuff. Both of the places I listed were founded by pro-free-speech advocate and specifically cited it as a reason for their continued lax policies.


We're (or at least I'm) not demanding neutrality. I'm criticizing the needless meddling of an almost unrelated middleman on a completely legitimate transaction. Also, holding the belief that neutrality is a good ideal for open-registration platforms to have is not anti-speech. I'm sad that you think that.


This feels just like saying "Elon Musk can ban any journalists he wants to". Yes, maybe he can, but is it the right thing to do?


I believe that in the end, people in desperation come up with all kinds of reasons to compromise their own values if it enables them to attack what they believe to be a great enough evil. I don't believe that all of these people are horrible and I don't believe they would all "report their neighbors to the KGB or Stasi" to be completely honest with you.

Recently, I came to realize that this is exactly what I dislike about the arguments, ironically, against encryption, as well as for increasingly draconian anti-money laundering laws. I think that everyone, even the lawmakers, does know deep down that what they are doing is a compromise of strongly held and long-cherished ideals. But pushed up against a wall, and with the easiest path to progress from their own PoV being lawmaking, lobbyists, government, etc., they delude themselves into thinking that this time it's different. This time, the ends do justify the means. And then in 50 years or so, we can all go "haha, how naive of them; of COURSE they don't!" while we're busy writing the next cycle of similar mistakes... Yada yada, history doesn't repeat itself but it rhymes, etc.


That's a fascinating difference in perspective, because I wouldn't even think of protesting Kickstarter under those circumstances, despite having been a pro-encryption activist most of my life.


I didn't say I agree, just that I've observed that for ~95% of people who say they are for "free speech", "maximum liberty" and so on, it's quite easy to find something they would strongly want to censor.

On HN that would be speech against nuclear energy or against encryption.


You’re conflating “being opposed” with “deplatforming”.

We will take your example, encryption.

Let’s say I strongly support encryption and I create a social media platform called Squibbit. Now, a group organizes on Squibbit calling for a ban on encryption.

One approach: I take it upon myself to organize another group on Squibbit using the platform the same way as everyone else (re: no special promotion, no admin only flagging, etc. just good old fashioned squibbling).

Compare this to another approach: I ban anyone who says they support a prohibition on encryption and use admin tools to promote voices in support of encryption.

Compare this to a final approach: the government decides it has a vested interest in encryption and wants to sway the voting population to support candidates who are pro-encryption. They ask Squibbit to ban all accounts opposing encryption and promote voices in support of encryption.

These three approaches are not the same. Do not conflate them.


What if a lot of programmers working at Squibbit are threatening to quit if the company doesn't ban the anti encryption group?

You know, just like they did when it was revealed Microsoft and Google wanted to sign contracts with the US military.


Other companies picked up these contracts.

The crucial thing though is to make sure that those other companies are in fact able to provide what Google and Microsoft won’t. For example, AWS is very happy to work with government, by offering not only the GovCloud region for regular gov work, but also AWS SC2S and C2S regions for Secret and Top Secret systems and data. This allows other companies to use their cloud infra to provide these defense services.

Imagine the alternative world where most of the tech execs come to a shared conclusion that they do not want to contribute any of their services towards AI-powered military efforts. This is very much something that could conceivably happen. What then? Sure, if they then get steam-rolled by a peer superpower who doesn’t have such qualms about using AI to achieve their geopolitical goals, you can retort that they got what they asked for (of course, in our year of 2022, it is completely unimaginable that peer powers would ever dare to initiate military invasion, so this is obviously just a pure hypothetical). But is this what the American people want?

Americans are, for better or worse, mostly in favor of US military, even at its stupidest (e.g. when it invades other countries that pose no risk to it under false pretenses). Should tech execs get then to decide the future of American people according to their own morality? Should the American people be able to force, through the overwhelming force of federal government, their own morality on individuals running their privately owned companies?

These are hard questions, and I don’t have an easy answer. What I know would help, though, is open and honest discussion, allowing people to win hearts and minds of others, and more mutual accommodation in the spirit of the famous quote attributed to Voltaire. This is very much not the direction the last 8 years have been pointing towards, though.


We're not arguing that the situation doesn't happen. We're arguing that it's a crappy situation.

But let's untangle something. There is a legitimate difference between this and ICE boycotts. Microsoft, Google, Amazon, etc. are signing B2B contracts and making specific deals to work with the U.S. military or ICE or whatever group. This is not the same issue. It's similar, but different in very important ways.

One is the type of interaction. Business-to-business contracts are not like using a platform as a user. A contract is a mutual agreement that is negotiated between two businesses. There are almost always dedicated employees whose responsibilities include dealing with a given contract, not to mention if there's any work to be done as a result of it. This is a very different interaction from merely being one of a million users on a public open-registration platform.

Two is who is protesting. I think some employee protests are silly, but not all of them. If employees don't want to work for a company that is doing what they believe to be unethical things, they are well within reason to threaten to leave the company.

I don't necessarily want to draw a direct conclusion about other similar events here, I'm just trying to point out that we're not dealing with the same kind of relationship.

I'm against banning entities from using e.g. Amazon Web Services, on the basis of their identity, if their usage is above board and legal. I'm not, however, talking about the concept of employees at Amazon not wanting to work for or with other entities directly.


If you are a leader and your staff do not have the same values as you, then in the long run it’s probably better that you part ways.


> On HN that would be speech against nuclear energy or against encryption.

Do you actually believe 95% of HN (or a number even remotely close to that, let's say 50%) would want discussions against nuclear energy or encryption completely censored?

There's a huge difference between somebody saying "this is a stupid idea and we shouldn't even be discussing it" (I would say that) and wanting censorship actually enforced (I would never want that).


Well, bear in mind that your example completely fell flat before you claim it's easy to find one.


I guess you forgot how many HNers urged everyone to boycott Microsoft, Oracle and Amazon and to quit those companies when it was revealed they wanted to get contracts with the US military.

Just like now KickStarter is under similar pressure.


Should they not be able to use their free speech to advocate for that?


And others are just pointing out that this isn't a grass-roots movement of hurt or concerned individuals and actually is the same manipulative media outlets that have blown everything out of proportion for years, hoping to create a backlash.


Is that not permitted under free speech?


To some degree, no. As these media outlets collaborate with government, as the twitter files are showing, they lose their right to have their own voice.

I doubt Kickstarter or Gawker has crossed the line, but it is there to be crossed.


That’s a novel interpretation of the First Amendment.


Personally, I would not sign a petition requesting Kickstarter drop it. It's weird to me that people even petition things like this.

The toughest personal test to me was the protest against Cloudflare. I have a very personal reason to have sided with the protesters and petitioners, and honestly I had a hard time even trying to oppose it. In the end though, I just don't agree with it. The ends do not justify the means.


I certainly wouldn't want such a policy to be enacted as law, but I'd never fault Kickstarter for allowing a fundraiser for a policy just because I disagree with it.


> I'm starting to feel like I'd just prefer if platforms stopped trying to make moral judgments on our behalf.

OpenAI's safety rails for chatGPT are super obnoxious. I was trying to generate D&D scenarios and have chatGPT act as NPCs. It worked really well unless anything even remotely related to violence was brought up (a character threatens another, a character draws a sword, etc). As soon as that happened, chatGPT would break character and give a canned lecture about why violence is never ok.


I found it pretty easy to get ChatGPT to be violent. You have to make it very clear that you want it to help with fiction and that it is to be a character in that world. I always use "play", which it seems to understand better than other fictional scenarios. It helps if you do things like naming the acts "Act I Scene I: The Shaman dispatches the Mage with extreme violence" or whatever.

Anyway, ChatGPT was more than happy to kill off my characters when I asked it to. I didn't keep the transcript, but the words haunt me. "The small man walked away, not knowing if he was dead or just very badly injured." This was after the small man pulled out a handgun in the middle of a profanity-laden rant from the other character and shot him in the neck. Don't know why he aimed for the neck, but that's AI for ya.


The lecture seems to be the more off-putting bit from what I've seen folks complain about. Having to signal before doing work is probably not great either, but that is kind of how American society works these days. That said, both the signaling and lecture don't derive from the machine - they're manifestations of the authors.


> Don't know why he aimed for the neck, but that's AI for ya.

IANAD but my general understanding of anatomy/physiology is that a solid neck wound is a pretty good way to off someone in a fairly certain way.

Admittedly many of the ends to it are agonizing... which says something else.


Yeah ChatGPT knows how to kill when it wants to kill. I don't understand why everyone is running around worried about the jobs of artists. We're just a few years away from a genocide the likes of mere humans have never imagined. Mark my words!!


I’ve been using the following starting prompt to get a lot of my D&d work done.

Write flavor text for my d&d game. <describe situation>.

Haven’t had too many issues with rails getting in my way yet. Any examples of your prompts that you were having issues with?

I’m finding ChatGPT to be so helpful as a lazy DM.


I’d agree with you if “platforms” were automated entities that just happen, like rain.

But they’re actually companies with employees, investors, managers, and other stakeholders. I can’t bring myself to say they have an obligation to put time and money and attention into things they want nothing to do with.

It’s easy to sit back and complain about the precedent these people are setting that could be abused by other people at some point in the future. But if it were me, with my company, I would not want to support projects like Unstable Diffusion, for both ethical and PR purposes. Would you? Would you commit to best effort to support them and help them fundraise, for the principle of it?


> But they’re actually companies with employees, investors, managers, and other stakeholders. I can’t bring myself to say they have an obligation to put time and money and attention into things they want nothing to do with.

Yeah, but Kickstarter always marketed itself as being synonymous with crowdfunding. It's the website you go to first to try to crowdfund something. That's why Unstable Diffusion wound up there to begin with, not because they thought it meshed well with the website or its owners, but because Kickstarter has always marketed themselves as being the all-inclusive go-to crowdfunding website. In such a place, whose moral compass should guide a decision like this? The CEO? Some angry employees? Credit card processors? etc.

> But if it were me, with my company, I would not want to support projects like Unstable Diffusion, for both ethical and PR purposes. Would you? Would you commit to best effort to support them and help them fundraise, for the principle of it?

I mean, yes, of course. I'm pretty much making it clear that I would do that, for the principle of it. In my opinion that's basically what you're signing up for when you decide to run something like this. Until like 2012 or so, that used to be basically how most internet companies would operate with some exception. And like I said, I understand why it's not as applicable as it used to be; some things have changed. But this decision in particular? Not seeing the point. Seems like an unnecessary concession for a perfectly above-board project. Seems like the middleman getting in the way of the will of two parties without any real valid reason to do so.


> Yeah, but Kickstarter always marketed itself as being synonymous with crowdfunding.

I think this is really the crux of the issue: That there are these quasi-monopolies for certain online services.

What's possible in crowdfunding comes down to what Kickstarter does. Expected behavior of a search engine is what Google does. The standards of an online encyclopedia look like what Wikipedia does. And what microblogging looks like is now determined by Elon.

If in each area there was a small ecosystem of, say, three to five different players, they could each find their niche, competing on something like leniency towards such projects like Unstable Diffusion, or on how to interpret free speech etc and the users could choose the service that suits their own standards.

But since for many popular online services there is effectively a single go-to site, this one site gets to set the rules that everyone has to play by.

So how do we get more variety? I don't know. It seems to be inherent in how this market of online services works that there's always one who ends up with 99% market share.


Values are always cheap to have until you hit a case that tests them.

It turns out that Kickstarter didn't really want the ones it had


Marketing is always reductive, the real world has nuance and context.

When a company says “we’re the best platform for X”, what they really mean is “. . . Subject to terms and conditions and our right to refuse business that we find objectionable” and so on.

Try pulling up to an all-you-can-eat buffet with a trailer and cleaning them out of every last bit of food in the place with the “one simple trick” that they didn’t specify you had to eat it on the premises, just that you “could” eat it.

Taking any marketing (or really, any) statement as absolute and universal without bounds no matter what the context is disingenuous. Everybody knows that, even the “one simple trick” people who try to use reductive literal arguments to get away with stuff.


Of course Kickstarter as a company can do nearly whatever it wants and ban certain types of crowfunding campaigns, it's just sad to see this happening left and right in many fields.

At least in this case someone can just choose a different platform (and in fact Unstable Diffusion has switched to something else); if you run a website and CloudFlare thinks your website is morally bad (likely because some people complained, otherwise there is no incentive to do anything), good luck defending yourself from DDoS attacks; if the Mastercard/VISA duopoly caves to public pressure and bans your organization you are basically done (even if what you're doing is completely legal).


I'd be more worried about all the obvious scams that Kickstarter supports to the same degree, ignoring reports about them because there's not enough publicity.


To let the campaign run, they didn't have to do anything. To shut it down they had to, as you said "put time and money and attention into things they want nothing to do with".

They proactively shut the campaign down.


> they didn't have to do anything

What, you would be fine if they just took the donated money and didn’t transmit it to the campaign operators? If their devops just ignored the campaign if there was any issue? And somehow they don’t need to handle support inquiries from happy (or unhappy) customers?

If you really think a business like kickstarter is all fixed costs and there is zero work for a marginal campaign, sounds like you’ve found a great business opportunity. VCs will beat your door down if you’ve found a model that scales with zero marginal cost and zero work for a new customer.


I'm pretty sure Kickstarter needs to put more time and attention to stop a project than letting one proceed, at this point.


> I can’t bring myself to say they have an obligation to put time and money and attention into things they want nothing to do with.

You don't have to put time and money and attention into things just to not ban them.


Right. One complaint is that it would not "represent under-trained concepts like LGBTQ and races and genders more fairly."

If all they need is $25K, that's not going to be hard to get. There's sufficient demand for what they want to make that it will probably happen. Although we'll probably have to go through an era of bad porn being used to train systems to generate worse porn.

Pornhub has an AI unit. They're currently taking early out-of-copyright porn films and up-converting to 4K, a higher frame rate, and colorizing. As they point out, they have lots of training data for colorizing and body parts. With that working, they'll probably go on to content generation.


>>I'd just prefer if platforms stopped trying to make moral judgments.

I feel like we've recently witnessed this idea fail. I think "platform" under the current convention, is at odds with neutrality. You can't have a platform like Kickstarter without opinions about what can and can't be done on their site. They're not going to allow stuff they disagree with, stuff that will get them in trouble, etc.

Neutrality of that kind must be structured in one way or another. Platform plurality, with lots of options. Open protocols, FOSS. Something like ISP neutrality.

Platforms are judgemental by default.


Should a bank be allowed to deny a white supremacist the ability to pay their utilities?

To me it isn't about whether or not platforms should do these things but if they should even be able to do them in the first place.


It's easy to take a single example and demonstrate a principle. That doesn't mean it'll hold water when the next issue comes around.

Should a small bank be obliged to provide payment processing for a supremacist website? Do they have the right to use the logo?

I'm not falling on one side or another. Simply pointing out what reality has been demonstrating strongly in recent years. These issues will come up. So will other, more commercially salient issues.

A service like Kickstarter just won't be neutral. There's no chance. Pressuring Kickstarter won't help. Want neutrality? It needs to be achieved by a different path.

I'm worried about AWS, and such. The current state of affairs is such that exclusion, censorship and such are inevitable.


> Should a small bank be obliged [...]

Yes. That's the point of becoming a licensed type of entity. If you don't like it, remain private.

That lady didn't have to issue gay-marriage certificates, but if she didn't she wasn't going to keep her job at the state marriage office.

> A service like Kickstarter just won't be neutral

Sure, so they need to remain arms-length from government then.

> I'm worried about AWS, and such. The current state of affairs is such that exclusion, censorship and such are inevitable.

Yup.


Sure they can. We've already got a ruleset for what society allows and doesn't, namely the Law. A platform can be neutral up to the point where the law becomes involved, after which it must either capitulate to outside demands or go underground. Any other acquiescence to group instruction is an intentional decision they're making, that they did not have to make. Reddit used to operate like that, everything up to the law was allowed. It's just that doesn't happen nowadays, it's more difficult to say exactly why though.

Is it that other companies make it actually not financially survivable to rebuff the mobs? Or is it more a fear of labels and moderate cuts to bottom line. I'm sure they'd lose some money with controversy but would it really destroy them?


What law, or prospective law, prevents Kickstarter from booting out users that they don't want?

There are certain protections for certain groups (eg religion) but there aren't any laws compelling Kickstarter to allow porn projects. There aren't laws that compel YouTube to allow political content, medical content. There are no legal rights guaranteeing any of this.

Half the problem here is that people don't realise where the problem is. The rights established in the past don't take YouTube and Twitter into account.

If we want these rights, we'll have to establish them ourselves.


Compare it to a printing press. If someone came to me and asked me to print pamphlets for [political party I dislike], I want to be able to say "no". Shouldn't we be asking more than "is it technically legal?" of the orgs we choose to do business with?


You decide to print pamphlets for your protest after a black colleague is singled out and treated in a racist manner by your boss/admin/whoever.

You ask the printing press to print pamphlets for your legal, peaceful and just protest. They say no, they'd rather not be involved in that sort of thing.

Do you still support this idea that businesses should make careful judgements about each client versus a general policy of neutrality?


Can you rephrase your example in a way that doesn't include a protected class/characteristic?


Can I try?

You decide to print pamphlets for your protest after a colleague is fired when your boss/admin/whoever found out she had an abortion.

You ask the printing press to print pamphlets for your legal, peaceful and just protest. They say no, they'd rather not be involved in that sort of thing.

Do you still support this idea that businesses should make careful judgements about each client versus a general policy of neutrality?


In my country she wouldn't have been fired in the first place. At-will employment is just a way to bully people into toeing the line.

Putting that aside, yes: Political speech shouldn't be compelled, any more than it should be constrained.

Now if you want to put some kind of further "common carrier"-style legal burden on printers (or kickstarter), then I think it's possible to make a case for it, but I'm not personally in favour. (On the other hand, in the UK we do exactly that with bank accounts - it's nearly impossible for a bank to turn away a new customer - so I guess there's a case for it in some circumstances). It's ultimately a question of regulation I guess.


Just turn the clock back to before the protected class was declared and their example is valid. If you don't like that, imagine a hypothetical future protected class that is currently not yet protected for surely there will be one.


>Unrelated to all of this, I'm starting to feel like I'd just prefer if platforms stopped trying to make moral judgments on our behalf.

Platforms overall don't take a stance for moral reasons but for perceived monetary reasons. If they believe they'll lose more money from users or advertisers leaving due to certain content than they gain from that content then they will get rid of that content. Whatever brings the most money will win in the end. Artists, as best I can tell, hate AI image models with a passion so presumably Kickstarter was afraid of losing a lot of campaigns from artists.


This isn't actually true.

The CEO can have moral opinions or the board of directors might or Black Rock might or a large proportion of employees might. Any of these groups having a very strong opinion and the others not having one is sufficient.


Neutrality and objectivity have never existed in the corporate world.


While I agree, is this not more of a case of self-preservation? It sounds like KS knows that it may not be able to weather the lawsuits from copyright infringement and CSAM generation.


It's not a moral problem, the morality angle is an excuse, these are just artists who don't want to be out of a job because of technological advances


Neutrality and impartiality means you're alt-right now.


The problem is that "objectivity" is impossible and "neutrality" is in fact just siding with the status quo. I'd argue that this new trend of companies taking responsibility for the things they do or enable is a result of people giving much more of a shit than they have in the past, not a result of apathy.

Look on the bright side. You're worried about setting a bad precedent, but the current trend in the top of the legal world is to ignore precedent, or hand-select the precedent you want. If this sort of thing becomes such a problem that action is needed, we can count on SCOTUS.


This is just a painfully thin excuse to justify exercising your power over others while you have it.

If the overton window shifts, and you and your ideas are being deplatformed using the same justifications you’ve championed, you’ll be screaming for a return to objectivity and neutrality.


Objectivity and neutrality don't exist; you can't return to a condition that never existed (people with narrow perspective frequently mistake alignment with their own subjective biases as being “objective” and/or “neutral”, but that is self-delusion.)


Objectivity and neutrality have value even if they can never be perfectly realized.

Case in point:

You wouldn’t be arguing against objectivity and neutrality if doing so did not advance your own social and political power; you yourself have abandoned any pretense of aspiring to either.

This makes you a self-serving person to whom it would be dangerous to grant power.


You have it exactly backwards. Anyone who claims to be objective is either bullshitting you or so naive as to be unreliable. Neither case is a person you want to empower.

Anyone who claims to be "neutral" is in fact declaring their own apathy towards a topic, and when that topic is "should a specific group of people have civil rights" then neutrality is ethically disgusting. I'm personally apathetic regarding whether some money-collecting company wants to work for AI people, so I don't really care about Kickstarter's policy here, and I don't foresee myself caring in the future. Pretending that Kickstarter is a power broker of some kind is in my estimation some serious Chicken Little territory.

"Aspiring" to either objectivity or neutrality is not the same thing as actually manifesting either quality, and such aspirations generally involve disclosing conflicts of interest or other biases, which are things every living person has. Wrapping either concept in other weasel words doesn't really move the needle.


It's not backwards to favor someone attempting to manifest objectivity and neutrality over the person telling you, forthrightly, that they'll trod over anyone and argue anything if it advances their aims and shifts power to their in-group.

I find your position to be the one that's "ethically disgusting", completely without principle beyond your self-serving aims, and corrosive to a healthy society.


Cancelling this campaign is Kickstarter working exactly as it was designed.

Kickstarter has always been about getting money to artists to make stuff. They also happen to be synonymous with "crowdfunding" because they were the first to make it work. Once that took off as a concept and KS started to see a lot of gadget campaigns, they began cracking down on those, because that wasn't what the people running Kickstarter wanted to bring into the world. Eventually Kickstarter came to a point where they decided to make this explicit in the rules of the company, rather than become a heartless corporate machine motivated solely by whatever increases shareholder profit in the coming financial quarter:

"Kickstarter’s mission is to help bring creative projects to life. We measure our success as a company by how well we achieve that mission, not by the size of our profits. That’s why we reincorporated Kickstarter as a Benefit Corporation in 2015." - https://www.kickstarter.com/charter

Deciding to kill this campaign is right in line with point 1D of their charter: "Kickstarter will engage beyond its walls with the greater issues and conversations affecting artists and creators." And with 4a: "Kickstarter will always support, serve, and champion artists and creators, especially those working in less commercial areas." Kickstarter's a machine made to fill the gap in public art funding left by the gutting of the NEA, not an "objective and neutral platform" that will happily help you build obviously evil things as long as they get their cut of the money.


This all operates under the assumption that both AI-generated and AI-assisted art is not art and does not come from an artist nor will it help artists.

As a photographer, I disagree on all those fronts. As a photographer who likes to edit their photos with surrealist elements that can't be photographed - such as portals to another dimension, aliens, or anime characters - I'm doubly mad that the art I'm creating is suddenly not considered art because I'm using an AI to generate art instead of stealing the art.

In the digital mashup scene, stealing art for digital editing is quite common, and nobody really bats an eye as long as the digital editing is 99% of the work. Copyright has been constantly ignored by the scene for the last 20 or so years that I've been a part of it (renders, C4Ds, fractals, stock photography, etc.). Think early-2000s forum signatures: almost all of the graphic art from that era was made by digitally mashing together stolen art, and few people cared that it violated copyright not to use exclusively permissive CC-licensed works.


photographers literally sued google into entirely removing the view image button, and the link to theft/infringement in that case was far more tenuous than Unstable Diffusion promising, in their funding pitch, to scrape portfolio sites and re-add artists who opted out of SD 3.0.

pedants can piss on us and tell us that 'the molecular structure of piss and rain is basically the exact same thing why are you mad lmao' but the effects of this unapologetic data scraping are already becoming clear. professional artists are simply removing their works from the public facing web. communities based around collaboration and shared knowledge are darkening at an alarming rate.


Kickstarter apparently got a lot of artists saying exactly what you disagree with about this project.


So, if someone started a Kickstarter campaign to hire artists to make the dataset to train a diffusion model, with appropriate licensing provisions (such as standard stuff like signing over the copyright to the entity running the project), is that an attack on artists because the resulting model would take away work from them or supporting them because it would create a bunch of work for some artists?


I'd say it's supporting artists! They're being compensated for producing the training dataset, instead of having their work grabbed under "fair use", which is a thing I feel needs to be redefined when you can scrape the entire Internet and do this kind of thing.


But Kickstarter still wouldn't allow it under the new rules, I believe.


I find the technology gatekeeping to be appalling. How can people make the argument that this is “blatant theft from Greg Rutkowski“ but turn a blind eye to Stable Diffusion, Midjourney, DALL-E, etc?

At least Unstable Diffusion isn’t predominantly used for the same fantasy/illustrations that people love to generate on the other services.


You must not have any artists in your life; a lot of us are hollering that all of these things are massive copyright-infringement machines, trained on our hard work without any permission, never mind giving us a single penny of the money the companies building these things are making off of them.


Would your opinion change if these models were trained exclusively on works in the public ___domain? Do you consider it to be infringement if new and novel art is created simply in the style of other artists? If so, how should we treat art students when they try to recreate a previous work?

It's also worth noting that producing new art is only a subset of possible use cases. So how should we treat new and novel photo-realistic images? Is that infringement too?


For a model trained exclusively on the public ___domain, I would still be unhappy about the prospect of a world where a lot of low-level gigs that keep artists alive have been replaced by spending a half hour fucking with prompts. It would however be free of the "this corporation abused fair use to create a training model which it is selling to people" problem.

And no, it's not infringement if you work in someone's style - but it's also not professional to do this for too long, especially with a living artist. One piece in someone else's style is a homage, several years of pieces in their style starts to get you labeled a "clone" or a "style thief". Unless you are their assistant, in which case "working in their style" is exactly what you are being paid to do.

If you would like an example of this happening, read the 'controversy' section of Keith Giffen's Wikipedia page, which sketches out the way fans and fellow artists took Keith Giffen to task for doing a complete style swipe from José Muñoz's shadow-drenched crime comics for several years: https://en.wikipedia.org/wiki/Keith_Giffen#Controversy


Why are you dismissing their point because the technology could be used in a different way? It isn't being used as you hypothesized.


Silence and ableism are what we get from artists. We're still in the kneejerk reaction phase. "Having to do it yourself" is an awful precondition for art and expression.


The concept of AI art is fine. The implementation by scraping copyrighted work and claiming that their magic algorithm makes them immune to the consequences is where the complaints are.


With the current length of copyright, there is no possible way to build it otherwise. Saying that AI needs to only train on public-___domain works, is equivalent to saying that it shouldn't exist.

Of course, if it didn't exist, I wouldn't be buying art from artists instead. I'd simply go back to my stories not being illustrated at all. So from my perspective, this story is all about artists wanting to tear the ability out of my hands.


One could procure the necessary rights from the copyright owners / artists for all the training data. Sure, it would be quite costly and take a long time - but it is possible.


No, it isn't possible. Tracing copyrights for many things is literally impossible. The rights might be owned by defunct company X, which died out several decades ago and had its records destroyed in a fire.


One does not need every single copyrighted object in the world, just a large, diverse and representative set. One would have to skip the cases where rights cannot be secured. This is possible, see for example Spotify.


I mean piracy is a thing and if that is your jive then, cool?

You aren't exactly making a convincing argument saying that you don't have to pay an artist this way.

And honestly, if you think this style of low-effort content creation isn't coming for books...


Feels convincing to me...


Nonsense. The claim made by those who say the end of knowledge work is nigh is that an AGI is coming that is superior to human intelligence. Humans do not need to train on and consume gigabytes and terabytes of past solutions to produce new work, and they do it at a trivial fraction of the wattage.


From my perspective, our magic algorithm does indeed make us immune to the “consequences” of scraping copyrighted work. Legally, diffusion models learn from work rather than copying any specific piece, and copyright does not protect a style or give artists the right to retroactively restrict learning from open content. Ethically, our society accepts legally automating people out of jobs, and there’s no good reason that art is different - simply social prestige and classism. Current models, even fine-tuned ones on specific artists, are not wrong, however sad or harmful for the artist they may be. There are winners and losers, that’s capitalism, for better or worse.

I am sorry for artists and think we should probably have a subsidy for human creativity. But for me, Twitter’s bad-faith arguments about “stealing” or “copyright violation”, and maximalist demands for bans on AI art, make compromise undesirable and probably impossible.


> Legally, diffusion models learn from work rather than copying any specific piece, and copyright does not protect a style or give artists the right to retroactively restrict learning from open content.

Unless you are a lawyer, you are just pulling reasoning out of thin air. All aspects of copyright involving humans have been decided by decades of court cases. Assuming you understand the underlying theory well enough to guess how a court case would go is presumptuous.

> Ethically, our society accepts legally automating people out of jobs

And society accepts paying people for their work.

> There are winners and losers, that’s capitalism, for better or worse.

Capitalism is all about avoiding paying for what you did in order to artificially increase your profits, I agree there.

> I am sorry for artists and think we should probably have a subsidy for human creativity

I mean if the AI models just... Trained on purchased or free materials there wouldn't be a problem.

Other media figured out how to handle when a "purchase" isn't a purchase in the traditional fashion: see streaming services.

AI art is just the latest version of piracy. "It is just bytes" yet again.


You keep thinking ML models are copying your work, but I think this comes from ignorance about how they work.

They're not copying, they're doing the equivalent of looking at thousands of paintings and from that figuring out what paintings are supposed to look like. If you were asked to draw something in the style of Picasso and did so, would your new art work be a copyright violation?

Is every painting you've ever seen public ___domain? Have you ever got an idea or influence from a non public ___domain artwork?


Here’s a thought experiment for you:

Is lossy encoding generative, or copyright violation? Let’s say you encode a bunch of copyrighted images as lossy JPEGs. Most people would say “copyright infringement”.

Now, say I have an encrypted zip file, a random binary blob… but with a pass phrase, it decodes to a bunch of copyrighted images. Still copyright infringement, right?

Now, if I have an AI compression algorithm where the model that rebuilds the content of the archive is a local binary blob, and all you transmit is the pass phrase, it’s still copyright infringement, right?

Even though you’re actually not sharing the content of the archive: you’re just sharing the pass phrase, and the AI is rebuilding the content from it.

So: 1) when you rebuild a binary-distinct output that sufficiently closely resembles the input, then 2) regardless of the manner in which you rebuild it, technically, it’s copyright infringement.

Now, this is where your argument fails.

…because your argument is that the technical means by which the content is “generated” inherently makes the content distinct and not copyright infringement.

However, “generation” and “decompression” are the same thing; the technical means by which the output is generated is irrelevant.
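That sameness can be sketched in a few lines. This is a toy XOR "codec" invented purely for illustration (nothing here is a real format or a real model); it stands in for both the encrypted archive and the generator:

```python
# Toy illustration: an "algorithm + pass phrase -> output" pipeline looks the
# same from the outside whether it is decryption or "generation". The XOR
# scheme below is made up for this example.

def encode(data: bytes, passphrase: bytes) -> bytes:
    # XOR each byte with the repeating pass phrase -> high-entropy blob
    return bytes(b ^ passphrase[i % len(passphrase)] for i, b in enumerate(data))

def decode(blob: bytes, passphrase: bytes) -> bytes:
    # XOR is its own inverse, so decoding is the same operation
    return encode(blob, passphrase)

artwork = b"a copyrighted image, say"
blob = encode(artwork, b"prompt")

assert artwork not in blob                  # no literal "copy" inside the blob
assert decode(blob, b"prompt") == artwork   # yet blob + phrase rebuilds it
```

The blob contains no byte-for-byte copy of the work, yet blob plus pass phrase deterministically reproduces it; whether you call that step "decompression" or "generation" doesn't change what comes out.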

The difference is that in this case, the output is sufficiently distinct that you could reasonably argue that it is different from the training data. That’s a fair argument!

…buuuut, here’s the thing: if your model can generate outputs that are copyright infringement because they are similar, but not binary-identical, to existing specific copyrighted works… then what you “effectively” have is a giant compressed archive containing both copyrighted and non-copyrighted work.

That’s problematic.

I am not misunderstanding how the models work; I’ve built models like this, and yes, there’s no “copy of the image” in the checkpoint file… just like an encrypted zip file doesn’t have a “copy of the image” in it until you apply the correct decompression and decryption algorithm, it’s just a blob with high entropy.

> Have you ever got an idea or influence from a non public ___domain artwork?

That is fundamentally not the issue here.

The issue is that the model checkpoint file contains a combination of data and algorithm that can rebuild both novel and copyrighted work.

If a zip file with both copyrighted and non-copyrighted work in it is infringement, so is this.

Ie. tldr: yes, it’s complicated, but this “this comes from ignorance about how they work” BS that is floating around the SD community is both dishonest and itself lacking an understanding of how the models work. It’s a pretty ironic thing to say, really. The right prompt can generate copyrighted content. That’s how the models were trained. Not binary-identical, but as we’ve already established, that’s not a necessary condition for copyright infringement, or else every lossy image format would be fair game.

…and yes, I get it, if someone invented a purely algorithmic decompression algorithm that could take “source code for MS Word” and generate exactly that as the output, then I’m arguing it would be infringing. Yup. …because the only way you could create that algorithm would be to encode the original source code into it.


But you can't rebuild the original image unless you're so incredibly specific that those same instructions given to a random artist would make the original image. And even then...

So your argument rests on a premise that, as far as I can tell, isn't true. What prompts did you give to what ML model that gave you something copyrighted?

Because I just tried a whole bunch of different prompts to make the bloody Mona Lisa with no luck.


> But you can't rebuild the original image

You can.

Diffusion models do this:

- prompt -> text encoder -> conditioning; noise -> diffusion -> latent -> vae decoder -> image

Or, for image-to-image:

- image -> vae encoder -> latent

- prompt -> text encoder -> conditioning; image latent + noise -> diffusion -> latent -> vae decoder -> image

Try image-to-image on the source image with a low strength. Can it regenerate the image? If the answer is yes, then there is categorically some latent that maps to the output. i.e. It is technically possible to generate the image from the model.
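The roundtrip claim can be made concrete with a toy stand-in. Everything below is invented for illustration (a real VAE is a lossy neural network, only approximately invertible); the point is just that if image-to-image can regenerate an image, then some latent maps back to it:

```python
# Toy stand-in for the VAE roundtrip: an invertible rescaling plays the role
# of the encoder/decoder pair. Names and the scheme are made up for this
# example; real VAEs are learned and lossy.

def vae_encode(image):
    return [px / 255.0 for px in image]        # image -> latent

def vae_decode(latent):
    return [round(v * 255.0) for v in latent]  # latent -> image

image = [12, 200, 37, 255]          # a tiny "image" as raw pixel values
latent = vae_encode(image)          # a latent exists for this exact image...
assert vae_decode(latent) == image  # ...and decoding it rebuilds the image
```

The existence of that latent is the whole premise of the seed-search argument below: the decoder's output space provably contains the original.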

The question is, how would you generate that latent from a prompt?

You do it like this:

1) Pick an image you want to find, eg. https://www.artstation.com/artwork/Zl6Zx

2) Do a reverse image search to find a suitable prompt, from: https://huggingface.co/spaces/pharma/CLIP-Interrogator

In this case, it's:

> a painting of a woman with blue eyes, by Aleksi Briclot, trending on Artstation, ghostly necromancer, red haired young woman, downward gaze, the blacksmits’ daughter, her gaze is downcast, dressed in a medieval lacy, gothic influence, screenshot from the game, from netflix's arcane, high priestess

3) Search the laion database for that prompt and see if it was part of the training data: https://rom1504.github.io/clip-retrieval

(Yes, it is: in this case, the top hit is a match score of 0.3968).

4) Crack open stable diffusion.

Put the cfg scale up (do not allow variation) and pick some step value that generates reasonably accurate images. Maybe k_euler, 50 steps, cfg scale 30, 512x768.

You're now generating images that are 'nearby' in the latent to the target; now it's just pissing around with the seed and with variations to narrow the gap.

> So your argument rests on a premise that, as far as I can tell, isn't true.

...but the point I'm making is that it is a) possible, and b) plausible, if you can be bothered doing a seed search. Can you be bothered? I can't be bothered.

Like... I mean, dude, the model was trained to be able to do this. That's literally what it's supposed to do. The VAE can map a latent to real existing images trivially (that's what it literally does when you use image-to-image). The latent is a 64x64x4 tensor that you're moving through, steered by your prompt.

Look, I get it, the chances of you picking the exact seed that generates this exact image are pretty slim, right? 64x64x4 is a massive fucking number, and the chance of stumbling on exactly the right seed is like winning the lottery, right?

...but that would only be true if you were RANDOMLY moving through the latent space, and you're not. You're specifically homing in on 'good' latent space values around real images using your prompt. That means that the chance of the latent you pick being a real picture is not 1/64x64x4... it's actually plausibly higher.

There is a non-zero chance that any generated output is actually a real image.
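A naive seed search over a promising region could be sketched like this. Both `generate` (standing in for a full diffusion sampling run) and `distance` (standing in for a perceptual metric) are toy stand-ins invented for this example, as is the "magic seed":

```python
import random

# Hypothetical seed-search sketch. `generate` stands in for a full diffusion
# sampling run and `distance` for a perceptual similarity metric; both are
# toy stand-ins made up for this example.

def generate(seed: int) -> list:
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(4)]  # a tiny fake "image"

def distance(a, b) -> int:
    return sum(abs(x - y) for x, y in zip(a, b))   # pixel-wise difference

target = generate(123123123)  # pretend this is a real training image

# Home in on the seed whose output best matches the target; if the "magic
# seed" lies inside the searched region, the image is reproduced exactly.
best_seed = min(range(123123000, 123124000),
                key=lambda s: distance(generate(s), target))
assert distance(generate(best_seed), target) == 0
```

The prompt's job in the real system is exactly to shrink the searched region from "all latents" to "latents near images like the target", which is why the odds are far better than blind chance.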

So, back to our original question:

Is sharing a magic seed (123123123 or whatever) the same as sharing a password to an encrypted zip file?

...because that's what it comes down to.

You have:

- (algorithm) that takes (pass phrase) and generates (output).

- If you can provide (pass phrase) and generate (output) that is copyrighted content, then is (algorithm) infringing?

- The answer applies the same way to encrypted zip files and to AI models.

So... you gotta pick which way you wanna roll with it, but you can't have one rule for zip files and another for models. You get both, with the same rules.

That's the problem.


Ableism? Isn’t that a bit much? This is taking the victimization nonsense to a level even I didn’t think was fathomable.


I'm with you. "Oh no look at that ableist piece of shit Andres Segovia with his ten fingers" give me a break. Chuck Close painted his giant portraits with what, one finger?

sorry if anyone is disabled and i hope they're able to get the support they need, but equating DIY with ableism only makes sense if your disability is "lazy and kind of a moron." and to be fair, i know plenty of people that suffer from this chronic condition, their art is pretty unnecessary, and they're often trying to ally themselves with whichever new protected class in order to garner kudos in ways other than being remarkable on their own merits.

i love this dumb premise, though. i'm going to continue thinking about it. damn, "thinking," how ableist of me... there are people in comas who can't think!


>ableism

unironically skill issue

https://www.youtube.com/watch?v=oDLoleIfAro

not to mention the blatant disregard for the many disabled people who rely on art as our only means to make a living. i've seen far too many singularity nutcases saying artists are regressive luddites who should be crushed under the noble wheel of progress from one side of their mouth while crying about gatekeeping and '''ableism''' from the other.


I was divided on this issue, but now, I agree. The noble wheel of progress will continue to turn. They only have the choice to get out of the way or be crushed under it


>Would your opinion change if these models were trained exclusively on works in the public ___domain?

this would be the ideal


I’m a professional writer that is paid for my copyrighted work. I welcome any tools that can help me write better.

I think there are many sides to this issue, and artists shouldn’t expect to keep their industry.


"Artists shouldn't expect to keep their industry" - so how do you feel about the SFWA's campaign to make Disney uphold the contracts for tie-in novels they bought along with Fox? https://www.sfwa.org/2020/11/18/disney-must-pay/


Do you hate artists looking at your art and picking up ideas from it too? Is every art movement inevitably copyright infringement?


Every human is trained on previous “hard work”. There’s no difference and your Luddite behavior is going to go… well the way of the Luddites.


Yet another triumph for the bosses of the world; in the absence of unions, automation only profits the owners of the machine, and none of the people who are now performing much less skilled labor gets a better life.

In this case, it's a labor that's already a way to barely scrape by for a lot of its practitioners.

This is the difference. Is this what you are cheering for? Is this the world you want?


The world I want is one where I can quickly generate and tweak art for my hobby game without it taking weeks, a thousand dollars, and contract negotiation.


The problem in the scenario you describe is the bosses, not the automation. How about we solve that problem, instead of AI witch hunts?


You’re relying on multiple meanings of the word “train”. Humans produce new knowledge, art, solutions, etc, without consuming the full body of human created media. It’s hardly even reminiscent of the same things. Humans produce solutions, at times, without ever being trained in a fraction of the vast body of human knowledge. This is not the same as a generative AI.


I mean I have seen tons of people complaining about the use of those models on social media. I don't think that people are only objecting to this project.

I don't agree with them, but I don't think it's fair to say there hasn't been pushback on the other projects you mentioned.


Stable Diffusion removed Greg Rutkowski et al from the training data per their request, Unstable Diffusion is a project to fine-tune them back in.


Greg Rutkowski was simply very present in the unknown dataset used by OpenAI for training CLIP-L/14, whereas when LAION trained the CLIP-H model it did so on the LAION dataset that seems to have far fewer references to this artist.


Unstable Diffusion started collecting donations via their website https://equilibriumai.com/


I and several other people I know have actually doubled our original pledges that we made on Kickstarter just as a giant figurative middle finger to the censorship clowns.


Thanks, donated!


Unstable Diffusion will find another way to fund itself. From the article, looks like they may have already found a viable option.

Artists will decry AI art for a few more months / years, then dust will settle, and we will still have ever-improving diffusion (or whatever the next iteration is) models. Slowly, people will realize that artists learn and copy from other artists much in the same way that current ML models do. Education about how these models work will improve, and the current mythology of "it's blatantly copy/pasting!" will subside, for the most part.

We'll start to see a counter-protest against anti-AI artists who use tools like Photoshop or other digital tools, most of which have involved some amount of machine learning (e.g. the "heal brush" or "context aware fill") for a long time.

AI art will start passing the "art turing test" at higher rates, and people will realize that whatever intrinsic spark / soul / whatever there is to art is mainly attributed to a narrative or story about the artist that exists outside of the canvas - and that the actual artistic contents of what lives on the canvas itself doesn't depend on a living, breathing thing if you make up a compelling narrative to support it.

Artists displaced by this new ease of generating art will find new economic niches, and life will continue on.


Porn is clearly missing from the HN title, which obscures the specific target here. Porn is one of the obvious uses of these techniques, like it or not. Porn is not illegal, and having this discussion in 2022 suggests "puritanism" is alive and well.

In the context of HN, it would be interesting to see how unstable diffusion will change the porn industry. I imagine anyone could contribute to the industry, if they want, modifying their identities. I think it was Axel Kuschevatzky [1] who said that the porn movie industry is the most conservative. Playing the same short scripts all the time.

[1] https://www.imdb.com/name/nm1098024/


The people behind unstable diffusion have created a dedicated means to donate to the cause:

https://equilibriumai.com/index.html

It's already up to the original amount that they intended to raise on Kickstarter. I have a feeling that all this negativity is just going to amplify the Streisand effect.


Here’s the thing though:

Kickstarter can now safely say, “our brand is not associated with this porn generator”.

…and, in the end, the Unstable folk got the money they wanted anyway?

Isn't this a victory for everyone?

Seems like a fuss about nothing to me.

Freedom of association; don’t like what they’re doing? Don’t be involved with it.


I assume that the artist's style is essentially just a mapping from their name into some latent space of styles, more or less; not necessarily that the model has memorized every individual's style.

If that's the case, it would be interesting to see what happens if the AI purposefully isn't trained on art in some part of the style latent space, and to see if it's able to still generate art in that style.

Taking this to a possible extreme: what if we trained it only on classical artwork? Would it be possible to get it to extrapolate to more modern styles, or to fine-tune it with only a small number of (as people are calling for) paid samples?


I really hope that Kickstarter is not another vendor-locked "we won't give you your backers' contacts" enterprise, and they just relaunch the campaign on another platform. Having competition in the FOSS sector is the only way for it to thrive.


There are other crowdfunding platforms that they can use. Kickstarter's glory days are long past.


A beautiful illustration of the Streisand effect, I see. I had no idea this was a thing.


This is good for Bitcoin


People weaponizing “non consensual porn” to attack this are also going to need to ban pencils and keyboards, and to police thoughts. Good luck!

and merry christmas


That’s like saying there’s no difference between knives and nukes, because both kill people.

The ease of it matters.


Ease, scale, and physical disconnect are all factors.


I thought that argument was a bit weird. Did the Unstable Diffusion folks specifically add revenge porn to their training data? If so, I could see why people were upset, but I didn't see any evidence of that happening.


deepfakes and pencil sketched nudes are thoroughly different in multiple ways


Not if your whole worldview rests on reductive arguments.

It’s ok to look in shop windows, therefore it’s ok to look in any window, therefore it’s ok to use ultra telephoto lenses to photograph naked people in their homes.

It’s hypocritical to ban fully automatic machine guns if you don’t also ban pillows, because both can be used to kill people.

And so on. There is a whole personality type that seems to be based on holding views that depend on ignoring all context in order to produce absurd results while claiming to be highly principled.


I often use this as an approach to debating. Example:

Can we both agree that you should be able to purchase a kitchen knife without a special license? (in parts of China, you can't). Do we both agree that allowing people to have personal nuclear weapons is probably a bad idea and shouldn't be legal?

If we can agree on those two things, we're actually on the same side of the argument. The only difference is where we draw the line. I might draw it at "you can buy a sword but not a gun" - you might draw it at "you can buy a gun but not a bazooka". But we both realise that some things can be given to people and handled responsibly (like a simple knife) but other things can't (like large amounts of explosives)

Occasionally you'll meet a person who says "no, I think people should be allowed to have nukes" or "no, we should ban all sharp objects", in which case you can dismiss a whole line of reasoning, i.e. arguing where to draw the line, because the person has made it clear there is no bounded limit and they are not open to debating it.

Other examples of scales where, if you can't establish boundaries, there is no point debating that person;

- can we forgive a child for accidentally killing a small animal? If so, can we forgive Hitler for genocide?

- can I draw a caricature of a famous person? If so, can I make deepfake porn featuring my neighbour?

- can I push a burglar out of my house when he breaks in? If so, can I execute anyone who enters my property?


Yep. It’s not worth engaging with someone who insists the objective principles of the universe support their exact beliefs and boundaries.

I’m also reminded of the old “would you sleep with me for $1m” joke: https://quoteinvestigator.com/2012/03/07/haggling/


I'm with you. All of society has always rested on gradients and fuzzy lines. "Highly principled" extremism, while nice in theory, has never made the world much better. Pragmatism is what has driven most advancement. That may not feel great to some people because it's rarely perfectly fair (among other things), but it doesn't change its success.


The legal system is exactly the opposite in establishing limits: canaries in the coal mine and the exceptions proving the rule.

Not many people really want to be "First Amendment auditors" or the defendant in Brandenburg v. Ohio, but they forced the issue to establish/discover the limits.

If what was legal was simply "what is average/normal/reasonable" then that likely would mean much less freedom.

You can also be influenced socially through media (etc.) in either direction over time to change what the definition of "reasonable" is. Remember that Barack Obama, a little more than a decade ago, ran on the now "extreme" position (one that many non-Western countries, like Japan, still hold) that marriage is between a man and a woman only. Conversely, gay marriage was "extreme"/fringe back then.


I think examples like the grandparent's are important for exploring why exactly we come to different conclusions.

For the porn example: Is it because one is often thought to be completely accurate, causing a distorted/wrong view of the world? Is it speed of creation? Is it how easily people can learn to do it? Is it automatability? Is it fine detail and accuracy?


Fully automatic machine guns are quite banned.

Using a telephoto to peer into someones window is also illegal as they typically have an expectation of privacy.

Photoshopping a movie star's face onto a nude body has been around a long time, and much of the fake porn of famous people was done this way, probably mostly by individuals. Please give it a google if you doubt me, at your own risk.


> Using a telephoto to peer into someones window is also illegal as they typically have an expectation of privacy.

In the US, in NY at least, they do not. A case about exactly this (telephoto lens photography through someone else's window, later displayed as art) didn't survive a motion to dismiss. This was upheld on appeal. Foster v Svenson (2015).


> Fully automatic machine guns are quite banned.

That very much depends on where you are. In US, they're not banned in most states, although they're quite expensive to get, and there's a lot more paperwork than usual.


Photoshop is a better example. Photoshop is better than any AI software for this.


Not to mention Photoshop. I honestly think that if Photoshop came out for the first time in 2022, it absolutely would get banned and be considered far too dangerous.


Related to this, there is a gofundme campaign going on which promises to hire lobbyists against these technologies in Washington.

https://www.gofundme.com/f/protecting-artists-from-ai-techno...


Probably better to just give that money straight to artists by commissioning art



