Thursday, May 14

For many years, copyright law was based on the simple principle that the creator is the owner of the work. Once thought to be self-evident, that premise is now entangled in a philosophical and legal conundrum, where algorithms imitate creativity and produce results that remarkably resemble human expression.

When a federal court upheld the U.S. Copyright Office’s stance in Thaler v. Perlmutter, the tension increased. Dr. Stephen Thaler, the plaintiff, had attempted to register a copyright on behalf of an artificial intelligence system he developed, claiming that his machine—aptly named the Creativity Machine—should be acknowledged as the author of a piece of visual art. The court disagreed, holding that current U.S. law recognizes only human beings as authors.

| Legal Issue | Summary |
| --- | --- |
| Human Authorship Requirement | Only human-created works are eligible for copyright, per U.S. law and recent court rulings. |
| AI-Created Content Status | Creations without substantial human input fall into the public domain and lack protection. |
| Thaler v. Perlmutter | Court ruled AI cannot be listed as an author for copyright purposes. |
| Copyright Training Data Lawsuits | Ongoing lawsuits argue AI firms used copyrighted data without consent during training. |
| Future Legal Considerations | Potential new legal categories like sui generis rights or clearer human-input thresholds. |

Source: Based on reports from NYSBA, EFF, and legal filings through 2025–2026.

Although anticipated, this decision did more than simply close a legal avenue. It sharpened a broader debate about what authorship means when a work emerges from lines of code rather than human hands. Under current policy, anything created without significant human input, whether a song, a story, or an illustration, falls into the public domain. That means anyone may use it freely and lawfully.

However, when AI is involved, few problems remain clear-cut. Consider a scenario that grows more common every day: a designer inputs prompts into an AI image generator, modifies the outputs, selects variations, and adds finishing touches. Should the finished product qualify for protection, given that it may be indistinguishable from wholly handmade work?

Copyright offices and courts are weighing such cases carefully. The crucial criterion is the threshold of “substantial human contribution,” though it remains imprecise. If a user significantly shapes the result through input, guidance, or editing, the work may qualify for protection. How much is “enough,” however, is still strikingly vague.

I recently attended a digital rights panel where a photographer described how he used generative tools to recreate a vision he had carried for years. “The software didn’t think of it,” he said. “But it brought me closer to the emotion I was pursuing.” What made the moment stick with me was its emotional clarity, not its legal significance.

The training methods used in AI systems are at the center of the current debates. Developers feed models large datasets, frequently scraped from the internet, to teach them how to write, draw, or compose. These repositories contain millions of copyrighted works, including books, images, articles, artwork, and recordings. Creators are now pushing back.

For example, Getty Images sued Stability AI for using its photos to train image generators without a license. The New York Times has sued OpenAI and Microsoft in a similar manner, claiming that millions of its articles were incorporated into their language models without permission. The plaintiffs in both cases assert that the AI tools now produce outputs that either mimic or displace their original content.

These are not isolated instances. In a number of jurisdictions, artists, writers, and musicians are claiming that AI companies infringed intellectual property rights by harvesting creative works under the banner of fair use. Their argument is straightforward: machines do not share a human’s privilege to learn when the outcome is commercialized.

Developers argue that AI training is similar to human learning: the model does not replicate finished works but absorbs patterns, styles, and structure. They maintain that most outputs are transformed through layers of abstraction and probability. However, fair use becomes harder to defend when an AI reproduces something strikingly similar to a work protected by copyright.

A subtle but significant redefinition of authorship, compensation, and value is taking place. A few lawmakers have noticed. In 2024, members of Congress introduced the Generative AI Copyright Disclosure Act, which would require businesses to disclose the copyrighted works contained in the datasets used to train generative AI systems. This could give artists a way to object or negotiate—tools they don’t currently have.

Other jurisdictions are developing their own strategies in the meantime. The European Union’s AI Act mandates that businesses document their training data and evaluate the risks of their AI models. While pressing for greater accountability from commercial firms, it also carves out exceptions for academic research and nonprofits. These initiatives suggest a nuanced, rather than reactive, regulatory future.

Lawsuits are not the only thing at stake. The economics of AI may completely change if training methods are found to be infringing, forcing developers to license large volumes of content. Preemptive deals are already being made by some companies. Disney, for instance, has granted strict licenses to AI developers for portions of its archive. It’s a practical decision: monetizing access is preferable to fighting in court.

New legal ideas are being contemplated concurrently. One proposal is to establish a new type of intellectual property, unique to AI-assisted works, that falls under neither copyright nor patent. These sui generis rights would acknowledge hybrid efforts, providing limited protections to works that rely on machine processes but reflect substantial human direction.

However, a number of legal experts caution that this middle ground could create its own confusion. In the absence of international agreement, the same image might be protected in Brussels but used without restriction in Boston. This fragmentation presents significant risks for companies that operate across borders.

From an ethical perspective, the discussion is equally complex. Even if there isn’t a direct copy of an artist’s style, should they still be paid? Does teaching an AI model from a copyrighted book automatically diminish the value of the author’s work? These inquiries compel us to reevaluate whether we should reward process, outcome, or both.

What about the creators themselves, who are using AI as a collaborator rather than a replacement? Many are not celebrities or tech behemoths. They are students, dreamers, freelancers, and small teams experimenting with tools that dramatically broaden their reach. For them the problem is more than a legal one. It is about legitimacy.

Congress can encourage innovation without compromising protection by defining authorship boundaries. Establishing guidelines that differentiate between someone who is actively influencing digital expression and someone who is merely clicking “generate” has merit. In this instance, fairness depends as much on intent as on technique.

Legal frameworks must change deliberately, rather than defensively, as AI develops further. Acknowledging the human labor that goes into prompting, editing, and vision is not merely a technical matter. It honors creative work regardless of the instruments used.

The goal of this legal journey is not to oppose machines. It’s about making sure that the human spark that drives machine creation is not overlooked or thrown away. In the era of AI, ownership is still up for debate, but one thing is certain: creativity is still a shared space that is messy, dynamic, and distinctly human.
