“Emotion.” “Intelligence.” “Creativity.” No, I’m not trying to sell you a luxury sedan or a Swiss watch. These are terms that are increasingly—and quite loosely—applied to machine learning. Applications and devices, we are told, can “sense” or “know” human emotions
like a person does. The terms “artificial intelligence” and “AI” are currently being attached to practically anything that can respond to a database query, rendering the definition of what constitutes intelligence in machine form functionally meaningless. Poor Alan Turing’s original concept of a test for machine intelligence
has been left drowning in the rush to ride the frothing white peaks of the hype curve.
And now creativity has come in for the same treatment. DeepMind’s doodles are positioned as art (a different, and quite interesting, strain perhaps, but the same as human art? Have we finished that debate?). Google’s AlphaGo matches pitting human player against machine have been reported and retweeted with evangelical fervor, complete with oohs
about the creativity of AlphaGo’s play. Yet is applying a novel combination of possible moves creative, or is it merely working through an (admittedly highly complex and adaptive) instruction set?
My point isn’t to set out firm definitions, but to ask whether the conversations around these definitions are settled to the point where developers and marketers can claim victory and move on. Roelof Pieters and Igor Schwarzmann both touched on this on Twitter recently. Hey, people have product to sell, I get it. But don’t we lose a bit (or a lot) when we let sloppy definitions slide through? How will we know when the real achievements have been made?