With executive order, White House tries to balance AI’s potential and peril
How do you regulate something that has the potential to both help and harm people, that touches every sector of the economy and that is changing so quickly even the experts can’t keep up?
That has been the main challenge for governments when it comes to artificial intelligence.
Regulate AI too slowly and you might miss out on the chance to prevent potential hazards and dangerous misuses of the technology.
React too quickly and you risk writing bad or harmful rules, stifling innovation or ending up in a position like the European Union’s. It first released its AI Act in 2021, just before a wave of new generative AI tools arrived, rendering much of the act obsolete. (The proposal, which has not yet been made law, was subsequently rewritten to shoehorn in some of the new tech, but it’s still a bit awkward.)
On Monday, the White House announced its own attempt to govern the fast-moving world of AI with a sweeping executive order that imposes new rules on companies and directs a host of federal agencies to begin putting guardrails around the technology.
The Biden administration, like other governments, has been under pressure to do something about the technology since late last year, when ChatGPT and other generative AI apps burst into public consciousness. AI companies have been sending executives to testify in front of Congress and briefing lawmakers on the technology’s promise and pitfalls, while activist groups have urged the federal government to crack down on AI’s dangerous uses, such as making new cyberweapons and creating misleading deepfakes.
President Joe Biden’s executive order tries to chart a middle path, allowing AI development to continue largely undisturbed while putting some modest rules in place and signaling that the federal government intends to keep a close eye on the AI industry in the coming years. In contrast to social media, a technology that was allowed to grow unimpeded for more than a decade before regulators showed any interest in it, the order shows that the Biden administration has no intention of letting AI fly under the radar.
The full executive order, which is more than 100 pages, appears to have a little something in it for almost everyone.
The most worried AI safety advocates — like those who signed an open letter this year claiming that AI poses a “risk of extinction” akin to pandemics and nuclear weapons — will be happy that the order imposes new requirements on the companies that build powerful AI systems.
In particular, companies that make the largest AI systems will be required to notify the government and share the results of their safety testing before releasing their models to the public.
These requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to compel U.S. companies to support efforts deemed important for national security.
© 2023 The New York Times Company