There's actually been some litigation around this topic recently. The general consensus seems to be that authorship gets assigned to whoever is closest to having initiated the creation process of the resulting artwork. So, in this case, I believe the person who selected and uploaded the photo to the service would have authorship--the actual process is considered more of a black-box tool being used by that person. A (way more) sophisticated digital paintbrush, if you will.
Wouldn't this be similar to using Photoshop or any other image manipulation tool? Adobe does not get ownership of the output of its software. Why would we think some random website offering image manipulation would be different? As you say, the software is just a tool in the image creation process.
What gets really messy is the training data. Renaissance paintings are (hopefully?) all public ___domain, but what would the situation be if I used living artists' work, or the collection of Disney cartoons?
The pictures themselves are probably public ___domain; the photograph of the picture, however, might be protected (the photographer picked the angle, lens, lighting, ...).
I'm not sure about links directly to court cases, but here are a couple of general-consumption articles from the last few years that address this sort of thing. I hope I didn't convey that the consensus on the subject is particularly solid...
Thanks for the links. I know little on the topic. It may be a while before a litigious content owner identifies their work as having contributed to another generated one. I have yet to see in-your-face examples being monetized.
From a technical standpoint, using copyrighted text to train a text translator is similar to using copyrighted movies to train a movie generator. Which of these is acceptable?
Combine it with https://www.thispersondoesnotexist.com/ and you've got a generator of completely fictional renaissance paintings. Here is one example, quite good:
I wonder about that. In most cases, once an algorithm is trained, running inference is just a function evaluation, which is usually computationally inexpensive.
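A minimal sketch of that point, assuming PyTorch and a toy placeholder model (nothing here is from the actual service):

```python
# Toy illustration: once the weights are trained and loaded, serving a
# request is a single forward pass. The model below is a placeholder,
# not the network behind the site.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
model.eval()                              # inference mode: no dropout/batch-norm updates
with torch.no_grad():                     # no gradients or optimizer state needed
    output = model(torch.randn(1, 256))   # one function evaluation per request
```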
This is mentioned on their site. Seems they've identified it.
> Currently, we are confirming that the output of the AI artist has been biased.
> We hope to use a wide variety of learning data and increase the diversity of output in the future.
It seems like the authors are truly limited by the data here. I mean, if someone were to do a similar project with Chinese Qing dynasty portraits, you would expect a bias toward Asian faces.
We have to be so conscious of bias in AI, yet in this case I wonder what the solution (if there is one) would be, given you genuinely have a biased data set to begin with.
That's almost for the best. The representation of non-European ethnic facial features / skin tones in renaissance art appeared to mostly be one of two groupings:
1) Accurate, naturalistic portrayals that almost certainly had an actual human sitting; and
Maybe this should be titled "nightmare generator" because most of the pictures I have tried, especially of my wife, have ended up with very frightening distortions. Might be cooler to pair this with cubist paintings instead, so the facial defects seem more like features.
As a painter, I’m genuinely impressed with the results. However, the system is not capable of taking much initiative. It will reject anything other than a full-face portrait. Of all genres, portraiture is the most convention-driven. For example, perhaps 90% of portraits are three-quarter view, with side lighting. Effectively, there is only a very limited set of solutions to the problem. Applying the same approach to a landscape painting would be a different class of problem altogether.
Ah, the title is misspelled; the page says “Gahaku”, which is a slightly archaic term in Japanese for an artist/painter, with a connotation of being a highly skilled and respected master of the craft.
There’s also a sarcastic net-slang meaning to the term, but the creators are probably not using it with that intent.
Yeah, the internet trend of turning everything into hyperbole squared is obnoxious. These look like blurry insane asylum art projects. I don't think any reasonable person is going to look at this stuff and call it a masterpiece.
As a representational painter, I've been waiting for someone to do a write-up on how/why these 'AI' paintings aren't going to be equivalent to human paintings until 'AI' is itself equivalent to humans.
Unfortunately, I haven't seen one yet. Maybe I'll have to do it myself.
IMO most 'generative art' is about as creative as Markov chain text, just with images as the input instead of words. It'll be a while before anyone can assert any sort of equivalence to human creativity and have it taken seriously.
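To make that comparison concrete, a word-level Markov chain text generator is roughly the sketch below (a toy example, not anyone's actual system): it can only recombine fragments of whatever text it was fed.

```python
# Toy word-level Markov chain: "generates" text purely by recombining
# adjacent-word pairs observed in its input text.
import random
from collections import defaultdict

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start_word, length=20):
    out = [start_word]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)
```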
Great work! Although it is interesting to me how it doesn't work very well for non-white faces. Which totally makes sense given that renaissance art training data reflects the time and place in which it was created.
Speaking as someone who is not an ML/AI evangelist: how is this great work? None of the images I've seen look remotely good. I have yet to see something that a real-life painter not suffering from a stroke would be willing to release.
I wear glasses. I've tried many photos of me with glasses, and it always looks like it can't handle them. Do you think it is because it was trained on paintings from a time when glasses weren't common?
Not really, though. That Google demo looks like a single-pass feed-forward neural network trained to perform style transfer. The textures and colors get replaced while the overall content stays the same.
This project seems to work by finding your image in the latent space of a GAN model, and then re-synthesizing a new image from that vector.
It's more like generating a whole new image that targets the overall look of the existing photo, while jointly optimizing the generated image to look like it comes from a set of Renaissance art.
Edit: on second thought, this tool might be running too quickly to be doing optimization to find an image in latent space. It might just be fancy vanilla style transfer done nicely. Hard to tell.
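For reference, the latent-space projection described above usually involves an optimization loop along these lines, which is why speed is the tell; this is a rough sketch assuming some pretrained generator (`generator` and `target_image` are placeholders, not whatever this site actually runs):

```python
# Rough sketch of GAN inversion: search latent space for a vector whose
# generated image matches the uploaded photo; generator(z) is then the
# re-synthesized "painting". Placeholders only, not the site's model.
import torch

def project_to_latent(generator, target_image, steps=500, lr=0.05, latent_dim=512):
    z = torch.randn(1, latent_dim, requires_grad=True)    # start from a random latent vector
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        reconstruction = generator(z)                     # synthesize an image from z
        loss = torch.nn.functional.mse_loss(reconstruction, target_image)
        loss.backward()                                   # real systems add perceptual/identity losses
        optimizer.step()
    return z.detach()                                     # final output is generator(z)
```

Running hundreds of generator passes per upload is far slower than a single feed-forward style-transfer pass, which is the crux of the "too quickly" doubt.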
I mean, I’m aware there probably weren’t many Black people during the Renaissance. But it would be even cooler if this worked for people with different skin tones.
This is the sort of great work that invites the question of whether "can" == "should".
It's awesome that the work of great artists can be reduced to an algorithm. That effort is its own kind of art, and will see application (e.g. restoration work) far and wide. It could help with tutorials for students to get into these older styles.
Despite all of the intermediate goodness, I still want the no-kidding product I buy to have had some actual human imperfection and idiosyncrasy injected.
I think you give these projects too much credit. They do not reduce the work of great artists to an algorithm, and they do not turn users' photos into Renaissance paintings. These projects are technically impressive and interesting, but art is not going anywhere.
https://imgur.com/a/hkG0rJg
https://imgur.com/a/YZ3oWUm