We need to modernize AI regulation

Regulation needs to be proactive. Here are two ways that can happen.


AI regulation is on its way. Sundar Pichai, CEO of Google, generated a lot of buzz recently with an op-ed in the Financial Times calling for greater regulation, while cautioning that it should be “a relatively light touch that recognizes the need to avoid a one-size-fits-all approach.” Tony Blair, the former UK Prime Minister, is on the record as saying that big tech - and by implication AI - is a utility and should be regulated accordingly. The US government has called for public input (due March 13) on AI regulation; for now, at least, the focus is on staying in the lead without impinging on the rights and values of US citizens.

Regulation is going to get stuck. Because of how AI actually works, accuracy and fairness are in competition. When training data encodes historical bias, or when an innocuous feature acts as a proxy for a protected attribute, a model optimized for accuracy or profit will reproduce that bias: an AI will be racist or sexist or unfair simply as a natural by-product of its optimization. It won’t deliver fairness, trust, or non-discrimination on its own. What makes AI regulation so difficult is that, although in theory many existing laws could work just fine for AI, in practice there’s a missing piece.
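To make that tension concrete, here is a minimal sketch in Python. Everything in it is a constructed assumption - the lending scenario, the feature names, the numbers - chosen only to illustrate the mechanism: a model that never sees the protected attribute and is trained purely for accuracy still treats the two groups very differently, because a legitimate-looking feature doubles as a proxy.

```python
# Hypothetical illustration of the accuracy/fairness tension.
# Synthetic data only; names and numbers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: two groups, split evenly.
group = rng.integers(0, 2, size=n)

# "Income" is the model's only input. By construction it is shifted
# lower for group 1, so it also acts as a proxy for group membership.
income = rng.normal(loc=50 + 10 * (group == 0), scale=15, size=n)

# True repayment depends only on income, plus noise.
repaid = (income + rng.normal(scale=10, size=n) > 55).astype(int)

# The model never sees `group`; it is trained purely for accuracy.
clf = LogisticRegression().fit(income.reshape(-1, 1), repaid)
approved = clf.predict(income.reshape(-1, 1))

print(f"accuracy: {(approved == repaid).mean():.2f}")
for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2f}")
# Typical output: respectable accuracy, but a wide gap in approval
# rates between groups. The unfairness is a by-product of the
# optimization, not an explicit choice anywhere in the code.
```

Any fix that narrows the gap - a group-specific threshold, a fairness constraint during training - gives up some accuracy on this data. That trade-off, not malice, is what regulators have to grapple with.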

AI outpaces humans. It takes time for people to recognize that damage has been done, and harm is rarely visible while it is happening. Reactive regulation will be ineffective because it leaves regulators without any technical way to regulate. All they have are organizational and human responses - oversight committees and corporate processes.

