
The New York Times AI Global Summit

We spoke at The New York Times on AI and marketing for the Global AI Advertising Summit, a global gathering of the paper’s advertising team. Read the case study below.

💡
“Helen and Dave delivered a brilliant session for our 500-person advertising team at The New York Times. For businesses not inherently rooted in technology, there is perhaps no phenomenon as alluring in its potential, inevitable in its advance and utterly terrifying by virtue of its technical and moral complexity as artificial intelligence. Dave and Helen delivered a uniquely engaging session that addressed this exact complexity and everyone came away from the workshop more optimistic about the role AI will play in their business moving forward.” –Zazie Pence, Talent and Culture, The New York Times
Helen and Dave Edwards speaking at The New York Times

Case Study

As AI diffuses through the global economy, companies are looking to make their workforces “AI-ready.” Digital transformation, big data, analytics, and the cloud all enable new services and faster innovation. For those who deal in data all day, every day—the “data-natives”—this transformation is a way of life. But for others—the “data-curious”—it can be daunting to keep up. Still others—the “data-deniers”—find it a challenge to see where they fit in.

Our answer is that everyone has a role in an AI-first company. The real challenge is getting everyone to a common baseline, where each person understands the power, reach, promise, and perils of modern AI and can contribute to innovation and operational practices with an eye to the AI workflow.

When The New York Times approached us to host a session on AI-readiness for the ad team, we knew we were going into an organization that already had sophisticated AI at work. The ad innovation team at The Times had spent two years developing audience models that could be offered to advertisers as contextual targeting tools. This included using panel-based data to construct an algorithm that scores every New York Times article against 18 different emotions such as “curious” or “optimistic.” The technology also predicts how likely an article is to motivate a reader to take a particular action, such as making a charitable donation, embarking on a dietary change, or spending a significant amount of money.
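
To make the emotion-scoring idea concrete, here is a minimal sketch of one way such a system could work: a multi-label text classifier trained on panel-labeled articles that returns a score per emotion for any new article. Everything in it (the model choice, the example articles, and the labels) is our own illustrative assumption; The Times’s actual system is proprietary and certainly far more sophisticated.

```python
# Illustrative sketch only: a multi-label emotion scorer trained on
# panel-labeled articles. Model choice, labels, and data are assumptions,
# not The New York Times's actual (proprietary) implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Panel data: each article is tagged with the emotions readers reported.
articles = [
    "A small-town teacher rebuilds her school after the flood.",
    "New telescope images reveal galaxies older than expected.",
]
panel_labels = [["hopeful", "inspired"], ["curious", "optimistic"]]

binarizer = MultiLabelBinarizer()          # one output column per emotion
y = binarizer.fit_transform(panel_labels)

# One binary classifier per emotion, over TF-IDF text features.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
model.fit(articles, y)

# Score a new article against every emotion; higher = stronger signal.
scores = model.predict_proba(["Scientists find a surprising clue in old data."])
for emotion, score in zip(binarizer.classes_, scores[0]):
    print(f"{emotion}: {score:.2f}")
```

The action-prediction piece described above follows the same pattern: replace the emotion labels with panel-reported actions (say, “made a donation”), and the per-label scores become estimates of how likely an article is to motivate each action.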

But building an AI-ready workforce involves much more than having strong data science teams, abundant data, and an AI-ready technology platform. True AI-readiness means having employees at all levels and in all types of roles who understand how machines learn. It means having employees who can spot opportunities to craft new workflows, products, and services that use the best of humans and machines—including being able to intervene when things go wrong.

We started by asking a simple question: What is AI?

We love this question because it instantly reveals people’s perceptions. The Terminator. Autonomous, weaponized drones. Amazon’s recommendation algorithm. Robot pets. Chatbots. Google’s search algorithm. Apple Maps. AI is used for both good and bad; it’s ubiquitous, incredibly useful, and not always right. It’s an everyday thing.

Next, we asked the team: What worries you about AI?

People were well informed about AI risks. Clearly they follow the headlines! Amazon’s hiring algorithm, abandoned due to bias against women. COMPAS’ recidivism algorithm, under fire for bias against Black defendants. Facebook’s discriminatory housing ads. The list goes on.

But then the big question: What can be done?

In most organizations, fixing machine bias is left to the technologists. That is, if it’s done at all. Our approach at Sonder Studio is different: the best fix for AI bias is more holistic.

It takes a diverse team operating a robust process that includes both technical and non-technical fixes, tackling design and operational issues: key aspects of UX design (say, adding prompts that help users understand correlations between so-called “neutral variables” and protected classes), important tradeoffs (such as the tradeoff between fairness and accuracy when different user groups have different base rates), and appropriate remedies and controls for when things go wrong (who is the “human-in-the-loop”?).
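
The tradeoff between fairness and accuracy deserves a worked example, because it is not a bug better engineering can remove: it falls out of arithmetic. The sketch below uses the relation from Alexandra Chouldechova’s 2017 analysis of recidivism scores (the COMPAS case above), with hypothetical numbers of our own. If a score delivers the same precision (PPV) and the same true positive rate (TPR) for two groups, the group with the higher base rate must end up with a higher false positive rate.

```python
# Hypothetical numbers illustrating Chouldechova's relation:
#   FPR = p / (1 - p) * (1 - PPV) / PPV * TPR,  where p is the base rate.
# Hold PPV and TPR equal across groups, and FPR must vary with p.

def false_positive_rate(base_rate: float, ppv: float, tpr: float) -> float:
    """False positive rate implied by a group's base rate, precision, and recall."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Same score quality for both groups (PPV = 70%, TPR = 60%), different base rates.
for name, p in [("group A", 0.3), ("group B", 0.5)]:
    print(f"{name}: base rate = {p:.0%}, implied FPR = {false_positive_rate(p, 0.7, 0.6):.1%}")
```

Here group B ends up with more than double group A’s false positive rate even though the score is equally “accurate” for both. Equalizing the error rates instead would force unequal precision somewhere else, which is why these tradeoffs need to be decided deliberately rather than left to a default.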

At The Times, we found a thirst for understanding these issues—on behalf of readers, advertisers, and staff. But we also saw something deeper: a sense of individual responsibility to take on the challenge of understanding machine bias.

Machine bias, as with human bias, can distort the truth and interfere with our progress toward a more just society. Those who communicate with society now need to have a working knowledge of AI bias, in addition to the confidence and authority to tackle it.

Nothing could be more true to The New York Times brand.

💡
Are you interested in a presentation or workshop on AI and complex change for your organization? Get in touch.
