The $1 Trillion Question
In this issue: Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview with Adam Cutler, and Helen's Book of the Week.
Regulation needs to be proactive. Here are two ways that can happen.
AI regulation is on its way. Sundar Pichai, CEO of Google, generated a lot of buzz recently with an op-ed he wrote for the Financial Times calling for greater regulation, though he argued it should be “a relatively light touch that recognizes the need to avoid a one-size-fits-all approach.” Tony Blair, the UK’s former PM, is on the record as saying that big tech - and by implication AI - is a utility and should be regulated accordingly. The US government has called for public input (due March 13) on the regulation of AI, where, right now at least, the focus is on staying in the lead without impinging on the rights and values of US citizens.
Regulation is going to get stuck. Because of how AI actually works, accuracy and fairness are in competition. An AI can end up racist or sexist or unfair simply as a natural by-product of optimizing for accuracy or profit. It won’t deliver fairness, trust, or non-discrimination on its own. What makes AI regulation so difficult is that, although in theory there are many existing laws that can work just fine for AI, in practice there’s a missing piece.
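The accuracy/fairness tension can be seen in a tiny sketch. All the data, scores, and thresholds below are hypothetical: a classifier that optimizes only for accuracy, and never even sees group membership, can still select members of one group at a much higher rate than another when its input feature correlates with group.

```python
# Hypothetical records: (score, group, true_label). The score correlates
# with both the label and the group -- a common by-product of historical data.
data = [
    (0.9, "A", 1), (0.8, "A", 1), (0.7, "A", 1), (0.6, "A", 0),
    (0.7, "B", 1), (0.5, "B", 0), (0.4, "B", 0), (0.3, "B", 0),
]

def accuracy(threshold):
    # Fraction of records where "score >= threshold" matches the true label.
    return sum((s >= threshold) == bool(label)
               for s, _, label in data) / len(data)

# Optimize for accuracy alone: pick the accuracy-maximizing threshold.
best = max({s for s, _, _ in data}, key=accuracy)

def selection_rate(group):
    # Fraction of a group's members the classifier accepts.
    members = [s for s, g, _ in data if g == group]
    return sum(s >= best for s in members) / len(members)

gap = selection_rate("A") - selection_rate("B")
print(f"threshold={best}, accuracy={accuracy(best):.2f}, "
      f"selection-rate gap={gap:.2f}")
```

On this toy data the chosen threshold is perfectly accurate, yet group A is selected three times as often as group B. Accuracy alone said nothing about parity; fairness has to be specified and enforced as a separate objective.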
AI outpaces humans. It takes time for people to see that damage has been done, and harm often can’t be seen while it’s happening. Reactive regulation will be ineffective because it leaves regulators without any technical way to regulate. All they have are organizational and human responses - oversight committees and corporate processes.
The Artificiality Weekend Briefing: About AI, Not Written by AI