Legal ambiguity in art created by artificial intelligence adds confusion to the debate.
Seeing is believing. But what about fully machine-generated images?
This is a question that scholars, advocates, and internet users have been pondering lately as art generated by artificial intelligence (AI) explodes in popularity. Some commentators have questioned who regulates this digitally created art, and whether courts can prevent creative ideas and techniques from being stolen in the process of its creation.
But the reality is that few regulations protect the copyrighted material used to train these AI tools, and the personal images used to create AI art receive little privacy protection. Advocates have called for regulatory solutions rooted in copyright and privacy law.
Toward the end of last year, the popular use of the Lensa AI app to generate stylized portraits from user-uploaded selfies fueled the latest controversy over the ethics of AI-generated art. The debate has been raging since early last year, when other AI models such as DALL-E 2 and Stable Diffusion quickly gained popularity.
Some commentators say these programs have made art more accessible. Stable Diffusion generates images for free from user-entered text, and Lensa sells portraits for just $3.99. One queer user of Lensa, which lets users specify their gender, shared that the avatars the app created were both delightful and consistent with their true gender identity.
Stable Diffusion created the image above in response to the prompt “Essay on Artificial Intelligence Regulation”.
However, many others have expressed concerns stemming from the mechanism such algorithms use to generate new images: they rely on vast collections of captioned images to train an AI algorithm on the relationship between textual and visual representations. For example, Stable Diffusion trained its algorithm on a data set collected by the German nonprofit LAION, which has gathered billions of captioned images from art shopping sites and websites such as Pinterest.
And because this collection was done without consent, artists and their supporters have raised copyright concerns.
One such artist, Greg Rutkowski, has reportedly complained that AI-generated images mimicking his art drown out his own work. Users appear to have prompted Stable Diffusion with text containing Rutkowski’s name about 100,000 times as of September 2022.
LAION, however, disclaims any copyright responsibility for the use of its images. The Copyright Act of 1976 gives the copyright owner of a work the exclusive rights to reproduce and adapt that work. But to be held liable for infringing the reproduction right, a person must make a fixed copy of the work. One court found that intermediate copies existing for just 1.2 seconds were not sufficiently fixed, calling into question whether the intermediate copies used to train AI programs could create liability.
A copyright owner may also struggle to claim infringement of the adaptation right because of the fair use doctrine. Under fair use, a person can create new works based on existing ones without the copyright owner’s permission if the new work is sufficiently transformative, that is, if it changes the meaning or message of the original or serves a different purpose.
Traditionally, fair use has presented a relatively low bar for those claiming it. The U.S. Supreme Court, however, may soon rule on fair use and adopt a stricter standard.
In addition to voicing copyright concerns, commentators have also expressed privacy concerns over the use of personal and private images to train the AI.
DALL-E 2 recently began allowing users to upload images of real people’s faces, and Stable Diffusion has operated without such limits or moderation since its inception. Users have expressed concerns about how their personal data is used. For example, one person discovered that LAION had collected his personal medical images.
Lensa also came under scrutiny for its privacy policy, which allowed the app to train its algorithms on user-uploaded images. Prisma, the company that owns Lensa, claimed that it permanently deletes a user’s images after creating their avatars. In December 2022, the company updated its privacy policy to clarify that it does not use personal data to train other Prisma AI tools.
Individuals with privacy concerns do not have many options. Residents of the European Union can file a General Data Protection Regulation (GDPR) complaint to request removal of their images from LAION. But this only prevents future use of an image; it does not undo past use or training. And U.S. users lack even these limited means: the LAION website allows users to request image removal only by email, such requests carry the same limitation, and the United States has no federal privacy protection comparable to the GDPR.
Proponents have proposed various regulatory solutions. To address the copyright issues, some experts argue that the U.S. Congress should pass AI-specific legislation. Others liken the problem to the illegal file sharing of the early 2000s, suggesting that legislators pursue broader licensing schemes for the underlying works.
To mitigate privacy risks, companies running AI generators can self-regulate to prevent the use of personal data. However, some commentators argue that relying on this strategy alone is insufficient.
Other experts are calling for federal statutory privacy protections that would allow people to object to these platforms’ use of their images. Such protections could also serve as an enforcement tool against the illegal or malicious collection of personal data.
Regulation often lags behind technology. Even months after the explosion in popularity of AI-generated art, the path of future regulation remains unclear.