How Stable Diffusion is disrupting artists’ livelihoods, and why “AI Creatives” won’t stop ripping off their favourite artwork.
Let me start by saying that I think Stable Diffusion, and similar models, are extremely impressive examples of the state of the art. As we’ll conclude later, these models are not so inherently terrible that our only option is to outlaw the technology. We simply need to clearly define how copyrighted material (such as original art) may or may not be used in the creation of models, and whether the companies profiting from these models are accountable for breaches of copyright (or loss of livelihood) when their users disingenuously imitate known artists.
In this article, we will touch on the foundation of generative models, see the advances made in customisation and fitting, look at how malicious actors can target artists’ livelihoods, and consider why this problem is unlikely to occur in music generative models.
Alice in Wonderland, in the style of Victorian cartoons, generated by DALLE 2
When using a generative art tool such as DALLE 2, you can’t help but be amazed by the advances deep learning has made in recent years. Generative models are multi-billion-parameter neural networks that learn the relationships between images and language, using encoders to map both into a shared space and thereby enabling translation between the two. A rough way to visualise this is to picture the model as a translator between two languages, where the words of both are arranged in a shared multi-dimensional space such that similar words sit close together. Models such as DALLE 2 follow this concept, except one of the “languages” is made of images.
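The shared-space idea can be sketched in a few lines of toy code. This is not a real model: the vectors, captions, and file names below are hand-picked purely to illustrate the mechanics, namely that a caption and an image which “mean” the same thing land close together in the joint space, so matching one to the other reduces to a cosine-similarity lookup.

```python
import numpy as np

# Toy "shared space": in a real system, separate text and image
# encoders (each a large neural network) would produce these vectors.
# Here they are hand-picked so matching pairs point the same way.
shared_text = {
    "a cat":  np.array([0.9, 0.1, 0.0]),
    "a boat": np.array([0.0, 0.2, 0.9]),
}
shared_image = {
    "cat_photo.png":  np.array([0.85, 0.15, 0.05]),
    "boat_photo.png": np.array([0.05, 0.25, 0.95]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_caption(image_name):
    """Pick the caption whose embedding lies closest to the image's."""
    emb = shared_image[image_name]
    return max(shared_text, key=lambda c: cosine(shared_text[c], emb))

print(best_caption("cat_photo.png"))   # → a cat
print(best_caption("boat_photo.png"))  # → a boat
```

Generation runs this translation in the opposite direction: given a text embedding, the model synthesises an image whose embedding sits nearby in the shared space.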
Dreambooth extends this concept by enabling the customisation, or fine-tuning, of a generalised text-to-image model into a more specific one biased towards a particular subject or art style.
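As a loose analogy only (the actual Dreambooth method fine-tunes a diffusion model using a rare identifier token and a prior-preservation loss), fine-tuning can be pictured as nudging a general model’s weights toward a handful of style examples. Every name and value below is illustrative, not taken from any real training pipeline:

```python
import numpy as np

# Toy analogy for few-shot fine-tuning: a "general" model is pulled
# toward the mean of a few examples of one artist's style.
rng = np.random.default_rng(0)

general_weights = np.zeros(4)                        # a generic "model"
style_examples = rng.normal(1.5, 0.1, size=(5, 4))   # a few works in one style

weights = general_weights.copy()
lr = 0.5
for _ in range(20):                                  # short fine-tuning loop
    grad = weights - style_examples.mean(axis=0)     # pull toward the style mean
    weights -= lr * grad

# After a few steps the model sits almost exactly on the artist's style.
print(np.allclose(weights, style_examples.mean(axis=0), atol=1e-3))  # → True
```

The point of the analogy: only a handful of examples, such as the images in a public portfolio, is enough to shift a general model toward one artist’s output, which is exactly what makes the imitation problem discussed below so cheap to carry out.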
Alice in Wonderland, in the style of Victorian cartoons, falling down a rabbit hole onto a pile of computers, generated by DALLE 2
Whilst Dreambooth was not released to the public by Google, with its authors noting how “malicious parties might try to use such images to mislead viewers”, imitations quickly arose to enable users to fit generalised models to specific art styles.
Popular online artists found their portfolios being used against them to generate cheap custom art, cutting the artist out of the commission process entirely. Many artists found this offensive, or saw it as an attack on their livelihoods, with an eye not just to current generative models but to a rapidly advancing field promising more convincing forgeries in the near future.
The online art and AI communities are fairly split on how this legal battle should be resolved, with the line drawn exactly where you might expect. Whilst deliberate attempts to attack an artist are frowned upon, such as a “competition” to best imitate SamDoesArt (which, upon closer inspection, mostly seems to be encouraging the adoption of a certain AI model-sharing platform), there is still plenty of conversation supporting the imitation of artists, with some believing they’re doing the artist a favour by allowing them to “increase their output without having to draw anything”.
“An artist like this could easily train a perfect model and increase his output without actually having to draw anything” yeah that’s why you guys will never be artists, you werent blessed with the love of drawing & working hard, watching your piece slowly taking form in ur hands pic.twitter.com/JUgivuQZm5— Pearl (@pearl_soe) December 10, 2022
Further to this, AI platforms attempting to draw the line for their users (or perhaps pre-emptively avoid lawsuits) face harsh criticism from community members, with some quite baffling “moral” arguments.
CivitAI, a platform for custom Stable Diffusion models, announced they’ll start accepting takedown requests from artists if/when a model is tuned to imitate them specifically. This was generally met with displeasure and even outrage by AI users. Here’s an open letter one posted: pic.twitter.com/E0zs8dggHi— Katria 🐷 (@katriadoodles) December 29, 2022
The crux of the issue is whether using copyrighted material to train a generative model (to generate more art in that style) violates the intellectual property rights of the artist. No legal framework currently exists to handle this question, and courts aren’t famous for siding with artists even in cases of copyright violations by real humans.
Are artists to be sacrificed on the path to progress? Is there another way to protect the art community whilst advancing generative models? How companies such as OpenAI approach generative music is a case study in why using copyrighted material inside generative models is a choice, not a necessity.
It’s well known that the music industry, particularly record labels, can be trigger-happy when it comes to protecting its intellectual property. Millions can be paid in royalties when an artist’s work is inappropriately sampled outside the loosely defined “fair use” policies of different countries. But in this way, the music industry has protected itself against the negative effects of generative modelling in a way the art community has not.
What is interesting is how this appears to influence AI developers such as OpenAI. Their MuseNet model, built on the same technology that appears in models such as DALLE, was trained specifically on classical or public-domain datasets, the subtext being that the music world is far better prepared to defend against imitation that might hurt music artists’ (or their associated corporations’) livelihoods.
While this decision is likely rooted in a desire to avoid copyright violations rather than in any respect for the work of the original artist, it’s a courtesy that hasn’t been extended to visual artists, and likely won’t be until an artist takes a case to court, or legislators are persuaded to clarify this use of copyrighted material, either of which will likely take years (unless someone decides to draw the attention of a giant such as Disney).
To conclude: despite generative models having existed in the wild for a relatively long time now (by pop-culture standards), the issues surrounding them continue to pervade the lives of artists, with companies turning to AI to save costs and jump on the newest trend (whilst remaining tone-deaf to their audience and workforce). Despite these missteps, however, some companies are managing to listen to the creative communities and are taking strong stances against generative models.
Tor...really? "Production constraints"? The book's not out until MAY. You just don't want to bother with doing a new cover because you're eager to get those sales ASAP. But this blatant AI art on a book cover releasing from a major publisher sets a dangerous precedent - https://t.co/23C9DapqKc pic.twitter.com/wwKoJrWs9g— Xiran🌻Tired & Busy (@XiranJayZhao) December 15, 2022
Powerful statement from @Kickstarter, who removed an AI image generator project from their site just now. pic.twitter.com/n9xUw8SmGA— Loish (@loishh) December 21, 2022
I’d like to see further efforts by the AI community to develop their models under stronger ethical guidelines, considering the impact of this work on existing human talent, and striving to work not just within the boundaries of a legal system ill-equipped to regulate disruptive technologies, but with an eye to how future legislation should be written, lest they fly too close to the sun and find themselves restricted by heavy-handed lawsuits that shut down the technology entirely.
Zachary Smith is an MSc Artificial Intelligence graduate, working in machine learning and writing about new technology and ethical AI.