From marketing to design, brands adopt AI tools despite risk


Even if you haven’t tried artificial intelligence tools that can write essays and poems or conjure new images on command, chances are the companies that make your household products are already starting to do so.

Mattel has put the AI image generator DALL-E to work by having it come up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same “generative” AI technology that powers the popular chatbot ChatGPT.

Meanwhile, Snapchat is bringing a chatbot to its messaging service. And the grocery delivery company Instacart is integrating ChatGPT to answer customers’ food questions.

Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t detailed exactly how it plans to deploy the technology, the move reflects the growing pressure on businesses to harness tools that many of their employees and consumers are already trying on their own.

“We must embrace the risks,” said Coca-Cola CEO James Quincey in a recent video announcing a partnership with startup OpenAI — maker of both DALL-E and ChatGPT — through an alliance led by the consulting firm Bain. “We need to embrace those risks intelligently, experiment, build on those experiments, drive scale, but not taking those risks is a hopeless point of view to start from.”

Indeed, some AI experts warn that businesses should carefully consider potential harms to customers, society and their own reputations before rushing to embrace ChatGPT and similar products in the workplace.

“I want people to think deeply before deploying this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by the major tech providers that recently released a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. “They should play around and tinker, but we should also think, what purpose are these tools serving in the first place?”

Some companies have been experimenting with AI for a while. Mattel revealed its use of OpenAI’s image generator in October as a client of Microsoft, whose partnership with OpenAI allows it to integrate OpenAI’s technology into Microsoft’s cloud computing platform.

But it wasn’t until the November 30 release of OpenAI’s ChatGPT, a free public tool, that widespread interest in generative AI tools began seeping into workplaces and executive suites.

“ChatGPT really sort of brought it home how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. “That’s changed the conversation in a lot of people’s minds where they really get it on a deeper level. My kids use it and my parents use it.”

There is reason for caution, however. While text generators like ChatGPT and Microsoft’s Bing chatbot can make the process of writing emails, presentations and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. Image generators trained on a huge trove of digital art and photography have raised copyright concerns from the original creators of those works.

“For companies that are really in the creative industry, if they want to make sure that they have copyright protection for those models, that’s still an open question,” said attorney Anna Gressel of the law firm Debevoise & Plimpton.