The Turing Test

One of the most famous tests of machine intelligence remains the Turing Test, proposed by Alan Turing—the father of modern computing, memorably played by Benedict Cumberbatch in the 2014 film 'The Imitation Game'.

A machine passes Turing's 'imitation game' when it can mimic human behavior well enough to fool us into believing it's flesh and blood.

The same year the movie was released, the Turing Test was reportedly passed by a chatbot named Eugene Goostman, which posed as a Ukrainian teen conversing in choppy English. The crowd went wild for a solid hour before realizing that the bot, far from achieving human-like intelligence, was actually quite dumb.

Today, many chatbots could plausibly pass, but it's fair to say that even the cream of the crop are hardly deserving of straight As and gold stars.

Turing’s methodology has been criticized for reducing AI to the level of a magic trick—and being far too simplistic. But we should remember that the test is a product of its time, and its continued relevance is surely a credit to Turing’s lasting legacy.

The Lovelace Test

Enter the Lovelace Test, appropriately named after a woman representing the intersection of creativity and computing. Ada Lovelace was the daughter of Romantic poet Lord Byron and worked alongside Charles Babbage on his somewhat doomed Analytical Engine project in the mid-1800s.

She argued that a machine had to transcend its pre-programmed instructions, or go off script, in order to be considered intelligent.

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."
– Ada Lovelace

The Lovelace Test was designed in 2001 by a team of computer scientists that included David Ferrucci, who went on to lead development of IBM's Jeopardy-winning Watson.

It can only be passed if the machine is able to generate an original idea without its human creator being able to explain how it did so. This would prove that it's capable of thinking for itself, going beyond its own code.

While useful, the test fails to acknowledge that the AI’s novel creation may have been nothing more than a fluke.

A recent modification proposed by Georgia Tech professor Mark Riedl—Lovelace 2.0—introduces random constraints, such as "create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat." The 'judge' is a person who had no involvement in programming the AI and who knows they are interacting with one.

The AI must create by design, not by chance, à la the Infinite Monkey Theorem (which states that a monkey randomly hitting keys on a typewriter for an infinite amount of time will eventually reproduce the complete works of Shakespeare).
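To see why "chance" is such a weak explanation for any real output, a quick back-of-the-envelope calculation helps (a hypothetical illustration, not part of the test itself; `chance_of_typing` is our own made-up helper):

```python
# Probability that random typing reproduces a target text in a single
# attempt of the same length, on a keyboard with `keys` equally likely keys.
def chance_of_typing(text: str, keys: int = 26) -> float:
    return (1 / keys) ** len(text)

# Three letters: 1 chance in 26^3 = 17,576
p_cat = chance_of_typing("cat")

# Thirteen letters ("tobeornottobe"): roughly 4 in 10^19 attempts
p_hamlet = chance_of_typing("tobeornottobe")
```

Each extra letter divides the odds by 26, which is why the "infinite" in the Infinite Monkey Theorem is doing all the work—and why a single lucky output tells us nothing about intelligence.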

Unsurprisingly, Lovelace 2.0 is much trickier, as it demands an understanding of what's being asked and, importantly, of the semantics of the data the AI draws from.

Can artificial intelligence ever match the real thing?

A growing number of AI researchers and academics are using creativity—the ability to think laterally, make unusual connections, and produce original output—as a proxy for intelligence. This is an interesting shift away from measuring an AI’s success solely via its computational or mechanical skills.

In the creative industries, the use of artificial intelligence is now increasingly ubiquitous: AI can make movie trailers, invent wacky recipes (Artichoke Gelatin Dogs, anyone?), paint like Rembrandt, and write cheeseball pop ballads.

It may play the role of the artist or, in a commercial context, the designer or marketer’s right-hand man. For instance, Bynder uses AI to power its search functionality and do the heavy lifting of tagging images for our clients.

While this is all very impressive, it’s hard to imagine how an AI could ever pass the Lovelace 2.0 test with flying colors.

To date, one of the most lauded milestones in machine learning is Google's artificial neural network (ANN) teaching itself to recognize cats in unlabeled YouTube stills.

Yet, it’s still light years away from matching human intellect. The ANN can only perform tasks that are first ‘mathematized’ and coded. Fundamentally human traits like humor, empathy and shared understanding—also known as social cognition—have proved resistant to mathematical formalization.
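To make "mathematized" concrete, here is a minimal sketch—a toy perceptron, nothing like Google's actual system, with made-up data—showing what it means to turn recognition into arithmetic: the input is just numbers, and "learning" is nothing more than repeated weight adjustments.

```python
# A toy perceptron: before a machine can "recognize" anything, the task
# must first be rewritten as numbers and arithmetic. Here an input is a
# pair of numbers, and learning is a loop of small weight corrections.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred  # 0 when correct, +/-1 when wrong
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Tiny labelled dataset: class 1 for "high" points, class 0 otherwise
data = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.2, 0.3), 0), ((0.9, 0.8), 1)]
w0, w1, b = train_perceptron(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

Everything here is a sum and a comparison. Humor, empathy and shared understanding resist this treatment precisely because no one knows how to reduce them to numbers and an error signal in the first place.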

That’s why it’s challenging to teach a machine the cultural subjectivity and sensitivity required to write a basic newspaper article, let alone a bestselling novel.

Testing natural intelligence is complex enough; AI is a whole new ball game. But maybe that’s the point. Such tests may not be valuable for the results they give—they serve to put AI development into perspective and prompt us to rethink the standards we hold AI to.

If free will and individuality are part and parcel of intelligence or creativity, it’s difficult to conceive of human-programmed machines ever making the cut.

Maybe we should focus on the more practical applications of limited AI (like Google Duplex), instead of the rather existential pursuit of a self-conscious machine that’s just as smart as us, or replicates how we think, feel and create.

Want to learn more about the future of deep learning and AI in digital asset management and the creative industries? Download our free guide here.
