We Need to Change How We Measure AI

Artificial intelligence has suddenly begun to show signs of human-like intelligence. AI has been advancing so swiftly that leading AI researchers and entrepreneurs predict it will soon surpass humans on all cognitive tasks, a milestone commonly referred to as Artificial General Intelligence, or AGI.

AI has already surpassed humans at a number of tasks, and the rate at which it overtakes humans on new tasks is increasing.
If you’re worried about being pushed aside by AI, there may be reason to doubt such predictions. How progress is measured comes with a built-in bias that can distort the picture of a generally intelligent AI. Measures based on human tests may both overstate AI abilities relative to humans and disguise the real nature of intelligence altogether.

The metrics for benchmarking machine performance largely hinge on tests designed for humans. This anthropomorphic approach raises two problems. First, tests for human intelligence are an incomplete and biased measure—they are culturally determined and inherently limited by the process of metrification itself. Second, tests designed for humans might not be the right tests for machines.