Ask someone to picture the life of an advertising executive and a scene from Mad Men is likely to come to mind: Don Draper snake-charming a pair of Kodak marketing executives with a perfectly crafted pitch about the emotional pull of nostalgia (“It’s delicate, but potent…”) in order to win the account for their new slide projector. “This device isn’t a spaceship,” Draper tells the entranced Kodak men of their slide carousel in one famous pitch from the show. “It’s a time machine.”
Well, it turns out, those days have mostly gone the way of three-martini lunches, skinny ties, smoking in the office, and widely tolerated workplace sexual harassment. In the digital era, instead of a high-stakes, high-wire act focused on high concepts, advertising has largely been reduced to a volume game. Marketing departments or creative agencies have to churn out dozens or hundreds of variations of digital ads for Facebook, Instagram, or web banners, each with slightly different imagery, display copy, and calls to action, and then conduct a series of A/B experiments to figure out what works for a particular target audience. It’s a slog.
A few weeks ago, I wrote about one company trying to use machine learning to take a bit of the drudgery out of this work, helping to automate the testing of different ads. Today, I want to talk about another: Pencil, a startup that is actually using A.I. to create the ads themselves. Based in Singapore, but with employees working remotely across the globe, Pencil automatically generates dozens of six-, 10-, or 15-second Facebook video advertisements in minutes.
“The ad industry has been moving from big ideas to small ideas,” Will Hanschell, Pencil’s co-founder and chief executive officer, tells me. “Instead of a Super Bowl ad, a multi-million-dollar blowout once a year, it is increasingly about very small, online ads. And in that environment, you have to run 10 ads and throw out the nine that don’t work and start again with another 10. That has made the job unfun for a lot of creative people.”
Pencil hopes it can free up these creative folks to work on the big picture while A.I. does the rest. “It cuts videos into scenes, generates copy, applies animations and then uses a predictive system that looks at variety and tries to determine what feels most on-brand and looks similar to things that have worked in the past for the brand,” Hanschell says.
A company gives Pencil’s software the URL of its website, and that software automatically grabs the logos, fonts, colors and other “brand image information” found there to use in the business’s ads. It can use images from the website, or a business can choose to provide the system with additional images or video. It uses sophisticated computer vision to understand what is happening in an image or a video so that it can match that to ad copy. To write the copy itself, Pencil uses GPT-3, the ultra-large natural language processing A.I. built by OpenAI, the San Francisco A.I. research firm.
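To give a rough sense of what that first, brand-scraping step involves, here is a minimal Python sketch. It is not Pencil’s actual code; it just illustrates how a few pieces of “brand image information” (a site name, a declared theme color, a logo image) can be pulled out of a page’s HTML with the standard library. The class names and sample page are made up for the example.

```python
from html.parser import HTMLParser

class BrandScraper(HTMLParser):
    """Collect rough 'brand image information' from a page's HTML:
    the logo image, the declared theme color, and the site name.
    (Illustrative only -- a real system would inspect CSS, fonts,
    and many more signals.)"""
    def __init__(self):
        super().__init__()
        self.logo = None
        self.theme_color = None
        self.site_name = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and self.logo is None and "logo" in (a.get("class") or "").lower():
            self.logo = a.get("src")          # first image marked as a logo
        elif tag == "meta":
            if a.get("name") == "theme-color":
                self.theme_color = a.get("content")   # brand accent color
            elif a.get("property") == "og:site_name":
                self.site_name = a.get("content")     # Open Graph site name

# A toy page standing in for a real brand's homepage.
html_page = """
<html><head>
  <meta name="theme-color" content="#1a73e8">
  <meta property="og:site_name" content="Acme Eyewear">
</head><body>
  <img class="site-logo" src="/assets/logo.svg">
</body></html>
"""

scraper = BrandScraper()
scraper.feed(html_page)
print(scraper.site_name, scraper.theme_color, scraper.logo)
# → Acme Eyewear #1a73e8 /assets/logo.svg
```

In practice a production scraper would also fetch linked stylesheets and fonts, but the principle is the same: the ads inherit their look directly from what is already published on the brand’s site.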
Hanschell says that when Pencil started out, using GPT-3’s predecessor, GPT-2, the ad copy it generated was usable only 60% of the time. Now, with GPT-3 and a better understanding of how to use the existing web copy to prompt the system, Hanschell says the system generates usable copy 95% of the time. What’s more, the system can actually generate novel ideas, he says. For instance, for a company that sells protein powder, the system can come up with ideas around energy, but it can also come up with ideas about the morning ritual or fitness, he says.
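The “better understanding of how to prompt the system” Hanschell mentions boils down to prompt construction: feeding the model the brand’s existing web copy plus a couple of example taglines, then cueing it to continue. The sketch below shows one hypothetical way such a prompt might be assembled; the brand, product, and copy are invented, and this is not Pencil’s actual prompt format.

```python
def build_copy_prompt(brand, product, web_copy, examples):
    """Assemble a few-shot prompt for a language model:
    existing on-brand website copy first, then example taglines,
    then a dash cueing the model to continue the list."""
    lines = [
        f"Brand: {brand}",
        f"Product: {product}",
        "Existing website copy:",
        web_copy,
        "",
        "On-brand ad taglines:",
    ]
    lines += [f"- {e}" for e in examples]
    lines.append("-")  # the model completes this line with a new tagline
    return "\n".join(lines)

prompt = build_copy_prompt(
    brand="Acme Eyewear",
    product="custom frames",
    web_copy="Handcrafted frames, fitted to you.",
    examples=["Your frames, your way."],
)
print(prompt)
```

The string that results would then be sent to a model like GPT-3, whose completion becomes a candidate tagline; the more on-brand material the prompt carries, the more usable the output tends to be.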
I watched a demo of Pencil’s software in which it created a series of Facebook ads for an eyeglasses company. It came up with the tagline, “Your frames, your way,” as well as, “Your wildest looks, perfectly crafted,” each paired with appropriate still images. Not exactly Don Draper. But not bad. And as Hanschell points out, in the volume game of today’s digital advertising jungle, plenty good enough to start acquiring customers.
What’s more, the system can provide a prediction of how well a particular ad will do compared to what the company has run in the past. For instance, it forecast that the “Your wildest looks, perfectly crafted” ad would do 55% better than previous ads the same company had run. That’s something most human ad executives can’t do.
Pencil is already being used by about 100 companies, including some big multinationals such as Unilever. It is a good example of a new generation of products—and even whole businesses—that are being made possible by rapid advances in natural language processing, or NLP. (For more on this, check out the latest episode of Fortune’s Brainstorm podcast. Also, last year, my Fortune colleague David Z. Morris wrote about several other companies using A.I. to automatically craft or refine digital ads. )
But at the same time, a growing number of ethical concerns are being raised about these underlying NLP systems. For instance, GPT-3, despite all of its seeming power, still fails simple tests of common-sense reasoning. It also has a problem with bias: Because it was trained on the entirety of the Internet, there’s a good chance it may have picked up a tendency to write sexist or racist prose.
One area where OpenAI itself has already acknowledged a problem: The system can exhibit a clear anti-Islamic bias, with a tendency to depict Muslims as violent. A recent paper by two researchers at Stanford found that in more than 60% of cases, GPT-3 associated Muslims with violence—and that the system was more likely to write about Black people in a negative context.
This led the tech journalist Dave Gershgorn, who covers A.I. for the tech site OneZero, to question why OpenAI would allow it to be used in a commercial setting and why OpenAI’s investor and partner, Microsoft, would be incorporating GPT-3’s capabilities into its own products. How broken does an A.I. system have to be, Gershgorn asked, before a tech company decides not to release it?
I asked Hanschell about the problem of potential bias. He noted that OpenAI had developed filters that screen out some of the worst examples. And he said that in Pencil’s case, no ads are ever run without a human approving them first. “One of the principles of this is that we wanted a human to be in control at all times,” he says.
So I guess maybe we can’t get back to those three-martini lunches quite yet. There’s still work for us to do.
With that, here’s the rest of this week’s A.I. news.