The horrifying episode in which Elon Musk’s Grok chatbot generated and posted millions of sexualized images of real people, including women and children, has a clear lesson: It should be illegal to use anyone’s photograph to create a fake image intended to depict that person.
Last summer, Congress passed the Take It Down Act, which prohibits posting deepfakes that depict people engaged in intimate sexual acts. Now Congress should expand the act to cover any misappropriation of a person’s likeness.
This can be accomplished in a manner consistent with the First Amendment. There is a long-standing common-law right of individuals to control the commercial use of their image and to prevent others from using it for gain. That right should provide a basis for outlawing the kind of image appropriation that occurred on Grok.
Exceptions will be necessary for political commentary and satire. But fake images should not count as newsworthy for First Amendment purposes. Moreover, the benefits of protecting people from the misappropriation of their images outweigh the risks of chilling the lawful publication of news images.
What made the Grok situation so immediately upsetting is that it permitted anyone to produce salacious images of anyone, at any time. It is also clear that market pressures alone will not suffice to stop the practice. Even if Grok has made changes that make it more difficult to produce such images, publicly available, open-source AI can, in principle, be used to achieve the same result.
This state of affairs can’t be right, and the law must find a way to protect against it. The violation is one of privacy: it certainly feels like an invasion to be depicted naked in an image that looks like a photograph.
Just as there is a common-law right against somebody taking my image without my consent and using it to promote their own product or otherwise make money, there should also be a legal right against somebody using my image without my consent for their own purposes. AI-generated images of made-up people would not be covered by this legal principle. The protection would, however, extend to any actually identifiable person.
To ensure that free speech is protected under such a law, the statute should exclude constitutionally protected speech. Editing a news photograph in a way that doesn’t misrepresent the original image shouldn’t be illegal, because a ban might chill the editorial process.
The Supreme Court has also protected parodies under the First Amendment, treating them as exempt from intellectual property restrictions such as copyright law. Parody images based on photographs of real people should therefore also be protected, to the extent that they aren’t intended to mislead the viewer but merely to comment on public figures and matters of public concern. The law should allow you to make all the memes you want using images of any politician or other public figure. The law should be restricted to the appropriation of photographic images or to AI-generated images that are indistinguishable from photographs.
I am a strong defender of First Amendment rights. AI-generated words and images deserve the same protection as any other form of speech or expression. But the First Amendment has never been understood to protect the misappropriation of one person’s image by another. Your image is your property. And if you can’t stop that image from being taken without your consent and transformed into something you don’t want, you don’t really own it.