Stable Diffusion? The same Stable Diffusion sued by Getty Images, which claims they used 12 million of its images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have produced multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.
Stable Diffusion was also, from its inception, in the hands of tech bros: funded and built with the help of Runway AI, a $3 billion AI company, and owned by Stability AI, a for-profit company presently valued at $1 billion that now has James Cameron on its board. The students who worked on a prior model (Latent Diffusion) were simply hired onto the Stable Diffusion project, that is all.
I don’t care to drag the discussion into your opinion of whether artists have any ownership of their art the second after they post it on the internet - for me it’s good enough that artists themselves assign licences to their work (CC, CC BY-SA, ©, etc.), and if a billion-dollar company is taking their work without permission (as in the © example) to profit off it, that’s stealing according to the artists’ own stated intent.
If they’re taking CC BY-SA work and failing to attribute it, then they are also breaking the licence and abusing content for their profit. A VLM could easily attach attribution metadata to its output identifying the source data used (see the sketch below) - strange that none of them want to.
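As a minimal sketch of how cheap the metadata step would be, assuming the hard part were already solved: suppose a generator could report which training images influenced an output. The `attributed_sources` list below is hypothetical, invented purely for illustration; embedding those records in the PNG is then a few lines of Pillow:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical: a generator that reports which training images shaped
# this output. No shipping model exposes this; the list is invented
# purely for illustration.
attributed_sources = [
    {"url": "https://example.com/artist/cat-study.jpg", "license": "CC BY-SA 4.0"},
    {"url": "https://example.com/photos/tabby.png", "license": "CC BY-SA 4.0"},
]

image = Image.open("generated_output.png")  # the model's output image
metadata = PngInfo()
for i, src in enumerate(attributed_sources):
    # PNG tEXt chunks persist through a normal save, so any viewer or
    # script can read the attribution back out later.
    metadata.add_text(f"Attribution-{i}", f"{src['url']} ({src['license']})")
image.save("generated_output_attributed.png", pnginfo=metadata)
```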
In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion; I don’t really care if mine is ‘good enough’ for you.
> Stable Diffusion? The same Stable Diffusion sued by Getty Images, which claims they used 12 million of its images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have produced multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.
Getting sued means Getty Images disagrees that the use of the images was legal; it does not mean the use was secret or immoral. Getty Images’ photos are included in the LAION-5B dataset that Stability AI publicly stated they used to create Stable Diffusion. So it’s not “intentionally obscuring” as you claimed.
Copying is not theft, no matter how many words you want to write about it. You can steal a painting by taking it off the wall. You can’t steal a JPG by right-clicking it and selecting “Copy Image”. That’s fundamentally different.
> A VLM could easily attach attribution metadata to its output identifying the source data used
Oh yeah? Easily? What attribution should a model trained purely on LAION-5B add to an output image if prompted with “photograph of a cat”?
> In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion; I don’t really care if mine is ‘good enough’ for you.
You can do whatever you want (within the usual rules) in your personal life, but you chose to enter into a discussion.
From that discussion it’s clear that your position is rooted in bias, not knowledge. That’s why you can’t point out substantial differences between AI-generated images and other techniques that re-use existing imagery, why you make up intentions you can’t back up, and why you prefer to dismiss academics as “tech bros” instead of engaging with the facts.