A flat above a fried chicken shop in Notting Hill is an odd place to be at the heart of what has been called “one of the most important legal questions” of the 21st century. It is the registered office of Stability AI, an artificial intelligence group that is upsetting artists around the world.
Stability AI is run by Emad Mostaque, a computer scientist and former hedge fund employee. It operates the image-generating software Stable Diffusion, described in a US lawsuit as “a 21st-century collage tool that remixes the copyrighted works of millions”. Type in “Elon Musk in a Van Gogh painting” and it produces an amusing pastiche.
The three women artists behind the US lawsuit have backing. Getty Images, the stock photo group with 135mn copyrighted images on its database, last week started another legal action against Stability AI in the UK courts. Getty’s images, along with millions of others, are used to train Stable Diffusion so it can perform its tricks.
The generative AI revolution has erupted fast: Stable Diffusion was only released in August, and promises to “empower billions of people to create stunning art within seconds”. Microsoft has made a multibillion-dollar investment in OpenAI, which last year unveiled the text-to-image generator Dall-E and runs ChatGPT.
Visual art is not the only discipline in which AI agents threaten havoc. The music industry is quaking at the prospect of millions of songs (and billions in intellectual property) being pored over by AI to produce new tracks. Tencent Music, the Chinese entertainment group, has released more than 1,000 tracks with synthetic voices.
In theory, algorithmic art is no more able to escape copyright and other IP laws than humans: if an AI tool produces a song or an image that does not sufficiently transform the works on which it draws to be original, the artists who have been exploited can sue. Using a black box to disguise what has been dubbed “music laundering” is not a convincing legal strategy.
Nor is an AI agent learning from a database entirely different from what humans have always done. Musicians listen to songs by rival bands, and artists study others to learn from their techniques. Although courts are full of disputes over whether songwriters have copied illegally, no one tells them to block their ears, or warns painters to close their eyes at exhibitions.
But scale makes all the difference, as the music industry knows very well. It was safe enough in the pre-digital era, when music was sold on vinyl and CDs and sometimes copied on tapes by fans. When Napster enabled mass downloading and distribution of digital tracks, the industry faced deep trouble before it was rescued by Spotify and licensed streaming.
AI tools not only crunch databases but manufacture images to order: why stop at Van Gogh when you can get Musk by Monet, Gauguin or Warhol just by typing in prompts? It is not high art, but Estelle Derclaye, a professor of intellectual property law at the University of Nottingham, observes that “if AI starts to replace human creativity, we have a problem”.
Humans retain plenty of advantages: a synthetic version of Harry Styles called by another name would not be a fraction as popular as the performer himself, even if the owner of the AI tool got away with it. But there are other uses — background music in video games, for example — for which a synthetic band sounding like BTS might be good enough.
Trying to halt AI artistry would be impossible, as well as undesirable. But the legal framework needs to be set to prevent human creativity from becoming financially overwhelmed. The issue raised by Getty is whether companies such as Stability AI should be able to train their AI tools on vast amounts of copyright material without asking permission or paying licence fees.
This is legal for research in many countries, and the UK government has proposed extending that to commercial use. There have been similar calls in the US for AI models to gain the right to “fair learning” on such data because it would be impossible to track down all the licence holders of gigabytes of material scraped from the web, seek approval and reward them.
That strikes me as too blasé, and similar to arguments in the days of illegal downloading that the digital horse had bolted, and everyone had to get used to it. Stability AI has been valued at $1bn and Microsoft’s investment in OpenAI shows that there is money around; what is needed is a mechanism to distribute more among creators.
Individuals need further protections: it is one thing to train AI software on a mass of material, but what if someone feeds in the works of a single living artist and then asks for a new sketch in her style? One Los Angeles illustrator was subjected to such AI “fine-tuning” by a Stable Diffusion user recently; it is not clear whether a court would call that fair use, but I don’t.
“Please know that we take these matters seriously,” Stability AI promised last week. Was its statement drafted by a human or by an AI tool? These days, it is so difficult to tell.