Before the NYT broke the story on Clearview AI, most people saw facial recognition as a natural extension of tech in general: it added convenience, or it was used by trusted parties to keep us safer.
If you’ve got nothing to hide, you’ve got nothing to fear. But this could not be more wrong.
Facial recognition has been creeping up on us for a while now. We love facial recognition on our phones, but our phone is a database of one. We’ve supplied our faces to the government for good use - driver’s licenses, passports and Global Entry are obvious examples. But our faces-as-data have bled from these narrow, context-dependent uses and have been repurposed for other databases where facial recognition technology is routinely deployed.
Facial recognition is a unique technology. No other technology so fundamentally breaks the concept of anonymity in a public space. In public, we take for granted that we have the right to choose whom to introduce ourselves to. Others around us have limited capacity to perceive, characterize and remember us. Facial recognition destroys the obscurity we rely on. When we lose obscurity, we lose our choices about how much of ourselves to show to others.
Nothing (outside of Yandex, a Russian image search engine) even comes close to Clearview AI. It’s a search engine for faces. And the company poses a material threat to who we are, who we think we are and who we hope to be. All the while, the company asks us to “just trust them.” “Trust” is…
…why I’m here on TV explaining all these things and that’s why we meet with a lot of the people in government. - Hoan Ton-That, CEO of Clearview AI
I’m not sure what he was explaining in the interview. Accuracy, perhaps? Apparently Clearview AI is 99.6% accurate. Accuracy is a core component of trustworthiness in AI: if the AI is not accurate enough, or if its accuracy is uneven and biased against certain groups, that’s a reason to worry. In the case of Clearview AI, the high accuracy itself is the reason to worry. The higher the accuracy, the lower the chance of escaping it.
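To make that concrete, here’s a back-of-the-envelope sketch (our arithmetic, not Clearview’s published methodology), treating “99.6% accurate” as the probability that a single search correctly matches a face that is in the database. The search volumes are hypothetical.

```python
# Back-of-the-envelope: what "99.6% accurate" implies at scale.
# Assumption (ours): accuracy = probability that one search correctly
# matches a face that is in the database.
accuracy = 0.996

# Failure mode 1: wrong matches pile up with search volume.
searches = 1_000_000  # hypothetical number of law-enforcement searches
print(f"Expected wrong matches: {searches * (1 - accuracy):,.0f}")  # ~4,000

# Failure mode 2: evading the system is effectively impossible.
# Probability of never being identified across repeated, independent searches:
for exposures in (1, 10, 100):
    p_evade = (1 - accuracy) ** exposures
    print(f"{exposures:>3} searches -> chance of evasion: {p_evade:.1e}")
```

Even a 0.4% error rate produces thousands of wrong matches at volume, while the odds of slipping through repeated searches collapse toward zero - both sides of the worry in one calculation.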
The more accurate the system, the greater the potential harm when it is misused. And if it’s everywhere and covers everyone, history says it will be misused. Even if the misuse isn’t malicious - even if it’s simply the gap between a developer’s intent and an unintended consequence - the consequences for individual obscurity over the course of our lives are potentially catastrophic.
Our faces are central to our identity. We must treat them as inalienable in the true sense of the word. But since we cannot (and do not want to) hide our faces to protect them from surveillance, there is only one way to assure that their inalienable quality is respected. This can only be done if we ban facial recognition. - Dorothea Baur
The power imbalance is extreme. In his CBS interview, Ton-That asserts that the software will not be available to the public “while he is still at the company.” Now our privacy hinges on one individual’s personal assurance and employment tenure. The leverage and amplification effect of AI is evident: one person in one company offering one product has more influence than the billions of people on whom the product acts. What happens when Ton-That decides that his product is far more lucrative as a 24/7 real-time surveillance tool than as an “after-the-fact” search tool? What happens when he and his team then decide what is “right” and what is “wrong”? What new norms for “something to hide” could emerge?
Clearview AI’s pedigree matters: Peter Thiel is an early investor. Thiel backs “big state” AI surveillance companies like Clearview and Palantir because he thinks the best way to protect us without becoming a police state is “to give the government the best surveillance tools possible.” Thiel believes that freedom and democracy are incompatible - which is why we see him funding companies like Clearview: democracy can’t be relied upon to keep technology progressing and markets growing.
As a recent profile of Thiel’s ideological positioning puts it: “‘Progress’ is always aligned with technology and the individual, and ‘chaos’ with politics and the masses.” So Thiel backs technology that makes governments stronger, less chaotic, less democratic and less diverse - and more likely to spend money on technology that preserves a techno-social power structure in which individuals buy into the idea that the only role of government is public safety, and that the best way to be safe is machine surveillance. Welcome to the messy masses under machine management.
A strong centralized state can restore order, breed progress, and open up new technologies, markets, and financial instruments from which Thiel might profit. And as long as it allows Thiel to make money and host dinner parties, who cares if its borders are cruelly and ruthlessly enforced? Who cares if its leader is an autocrat? Who cares, for that matter, if it’s democratic? In fact, it might be better if it weren’t. - Intelligencer
This story highlights how reliant we are on the big tech platforms to protect us. The billions of images scraped from social media as “public data” may be technically public, but their extraction infringes the T&Cs of Twitter, Google, YouTube and Venmo. Clearview has been served cease-and-desist letters, but that doesn’t mean the images the company already holds will be deleted.
If you want to know whether they have your face and would like the image deleted, you can - if you live in Illinois, where the Biometric Information Privacy Act will likely give you grounds to request that your data be erased. Be prepared to supply a government-issued photo ID with your application.
State authorities are responding. The New Jersey Attorney General called for a temporary ban, saying that while he wasn’t against facial recognition per se, he directed that “all law enforcement agencies in New Jersey stop using Clearview’s technology until we get a better handle on the situation.” Thank you, NJ.
This is truly a Black Mirror moment. Clearview may have inadvertently exposed what state surveillance is about in the age of AI. Under the guise of safety, we lose our right to be obscure, to be anonymous, and thereby our right to a future of our own determination. As Shoshana Zuboff says in The Age of Surveillance Capitalism, “the real psychological truth is this: if you have nothing to hide, you are nothing.”
Also this week:
A must-check-out: Facebook finally released its Off-Facebook Activity tool, so you can now discover who sells your data to Facebook. It’s really worth a look. In an informal (and tiny) survey this week, we found that each user had, on average, around 100 companies selling their personal data to Facebook for ad targeting. It’s easy to check: from your Facebook news feed, go to Settings & Privacy, scroll down to Off-Facebook Activity, log in (a security feature) and then click on the brand icons. More details and discussion here.
An interesting perspective in Slate on AI ethics officers: the role won’t work because it implies that everyone else can’t be ethical.
Great interview between Stuart Russell and James Manyika from McKinsey. “How does one person improve the life of another? We know there are people who can do it. But, generally speaking, there’s no how-to manual. There’s no science. There’s no engineering of this. We put enormous resources, in the trillions of dollars, into the science and engineering of the cell phone, but not into the science and engineering of how one person can improve the life of another.”
Soul Machines and the NZ Police are experimenting with digital police officers as part of a strategy to use digital tech for accessibility and inclusion. Check it out in the NZ Herald. If you like listening to a broad Kiwi accent, you’ll want to put this on infinite loop.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.