A Legal Look at Artificial Intelligence and Copyright


Music “as we know it” has been prematurely pronounced dead several times over. The cassette tape, MIDI digital synthesizers, Napster, Auto-Tune and streaming were all received with apocalyptic hysteria. The current existential threat is artificial intelligence (AI), a software leviathan with a voracious appetite for copyrighted works and a prolific capacity for human-free creative processes. Whether AI will kill the humanity of music remains debatable. What is not up for debate is that AI raises many legal issues. While courts have yet to weigh in, the U.S. Copyright Office has issued instructive decisions and made AI-related copyright issues a 2023 priority.

The proliferation of AI in music

AI in music is not new. Alan Turing, the godfather of computer science, created a simple melody-making machine in 1951. Experimental trombonist and composer George Lewis improvised a live quartet with three Apple II computers in 1984. David Bowie experimented with a digital lyric randomizer in the ’90s. Hello World, the first AI-composed pop album, was released in 2018.

Today’s AI is more evolved and exponentially more impactful. Indirect enhancements (personalized playlists, music recommendations, etc.) have given way to direct creation tools. For example, Google’s Magenta wrote a new “Nirvana” song by analyzing the melody, chord changes, guitar riffs and lyrics of the band’s past works. ChatGPT takes text prompts and composes lyrics superior to those IBM Watson wrote for Alex da Kid in 2016. Authentic Artists leases AI-powered artists-for-hire. MUSICinYOU.ai generates tailored compositions from a 300-question personality test. BandLab’s SongStarter is an “AI-powered idea generator” capable of creating royalty-free music in seconds. Startup Staccato pitches itself as “an AI Lennon to your McCartney,” given its ability to bounce ideas off human songwriters.

Only “sufficient human creative input” supports copyright ownership

The Copyright Act protects “works of authorship” – a concept derived from the U.S. Constitution’s Copyright Clause, which empowers Congress to secure “exclusive rights” for “authors.” Courts have held that authors must be human. Consequently, animals (including the famed monkey selfie) and natural forces (a naturally growing garden) cannot be authors of copyrighted works.

While current legal precedent suggests that AI likewise cannot “author” copyrighted works, the critical question is how much human creative input or intervention is needed to make an AI-generated musical work copyrightable (and who owns the resulting copyright).

U.S. courts have yet to answer this question decisively. The Copyright Office, however, has drawn some basic boundary lines. AI advocate Steven Thaler filed a copyright application for AI-generated artwork. The Copyright Office rejected the application three times, finding that the artwork was not “created with contribution from a human author” and thus failed to meet the human authorship requirement. (Thaler has since sued.)

Conversely, copyright protection was afforded to David Cope’s 1997 work Classical Music Composed by Computer (and, again, to his 2010 album From Darkness, Light). Cope successfully demonstrated that his works only partially used AI and were the result of sufficient human creative input and intervention. More recently, the Copyright Office granted a first-of-its-kind copyright to a comic book created with the assistance of text-to-image AI Midjourney (though the Copyright Office is now reconsidering its decision).

In the absence of bright-line rules for ascertaining how much input or intervention by an AI’s user is needed, each work must be evaluated individually. It is a question of degree. Under traditional principles, the more human involvement there is, and the more the AI is used as a tool (rather than as the creator), the stronger the case for copyright protection. A song created with the prompt “create a song that sounds like The Weeknd” will not suffice. But a copyright application that both (i) demonstrates that a human controlled the AI and (ii) memorializes the specific human input in the creative process is more likely to succeed.

A word of caution: the Copyright Office has made clear that misrepresenting the use of AI in the music-generation process is fraudulent. And although the Copyright Office relies solely on the facts stated in applications, both it and future litigants are likely to deploy AI-detecting software to verify the extent to which AI was used to generate a musical work.

AI “training” looms as the first major battleground

Generative AI software (like Magenta) is “trained” by feeding it vast quantities of content – text, lyrics, code, audio, written compositions – and then programming it to use that source material to generate new material. In October 2022, the RIAA fired a warning flare, declaring that AI-based extractors and mixers were infringing its members’ rights by using their music to train AI models. Those who side with the RIAA argue that AI’s mind-boggling ingestion of copyrighted music violates the Copyright Act’s exclusive rights to reproduce and to create “derivative works” based upon one or more preexisting works. Because generative AI produces output “based upon” preexisting works (the input), copyright owners insist that a license is needed.
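For readers who want a concrete picture of what “training” on source material means, the sketch below shows the idea at toy scale. It is a minimal illustration only – a simple word-level Markov chain rather than the neural networks used by tools like Magenta – and the corpus lines and function names are hypothetical, not drawn from any real product.

```python
import random
from collections import defaultdict

# Toy illustration: a word-level Markov chain "trained" on a small corpus.
# Real generative systems use far more sophisticated models, but the core
# legal point is visible even here: the output is derived statistically
# from whatever works were ingested during training.

def train(corpus_lines):
    """Ingest source text and record which word tends to follow which."""
    transitions = defaultdict(list)
    for line in corpus_lines:
        words = line.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed_word, length=8):
    """Produce 'new' text by sampling from the learned transitions."""
    word = seed_word
    output = [word]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical placeholder lyrics standing in for copyrighted training data.
corpus = [
    "the rain falls soft on empty streets",
    "empty streets remember every song",
]
model = train(corpus)
print(generate(model, seed_word="the"))
```

Even in this trivial example, every word the program emits was lifted from the ingested lines, which is why copyright owners characterize generative output as “based upon” the training works, while AI advocates respond that the statistical recombination is transformative.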

On the other hand, AI advocates argue that the use of such data for training falls within copyright law’s “fair use” exception, claiming that the training use is transformative, does not produce substantially similar works, and has no material impact on the market for the original works. They contend that the training data has been sufficiently transformed by the AI process to yield musical works beyond the copyright protection of the originals.

These competing views are likely to be tested in the class action lawsuit just filed on behalf of a group of artists against Stability AI, DeviantArt, and Midjourney for allegedly infringing “billions of copyrighted images” in creating AI art. (Getty Images recently filed a comparable lawsuit against Stability AI in the U.K.).

Proving infringement with AI-works

How the AI was trained and how it operates will be central issues in copyright infringement litigation. Proving infringement is a two-step process: the plaintiff must demonstrate (1) that copying occurred and (2) that the copying was unlawful because the defendant took so much of the plaintiff’s protected expression that the works are substantially similar.

The first of these inquiries can be proven by direct evidence of copying or circumstantially by establishing access to the specific, allegedly infringed musical work. For visual art, Spawning AI offers a tool called “Have I Been Trained” that allows users to search the images used to train AI art generators. No comparable tool yet exists for music, but one is likely imminent.

The nature of the AI instructions will also be crucial to showing awareness of the original work and substantial similarity between the AI-generated music and the allegedly infringed music. Prompts that intentionally draw on copyrighted works (e.g., create a work in “the style of _”) undoubtedly bear on the issue of substantial similarity. The marketplace is already pivoting in anticipation of rulings: Songmastr, for example, has stopped marketing its ability to create songs based on the styles of Beyoncé and Taylor Swift.

AI is evolving faster than the courts can evaluate how the law applies to it. The just-filed art litigation may provide some clarity; until it does, those creating AI-generated music are well advised to stay cognizant of the legal risks and to guide the artificial music-making process with a genuine human touch.